Test Report: QEMU_macOS 19662

3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.33
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.25
33 TestAddons/parallel/Registry 71.38
46 TestCertOptions 10.22
47 TestCertExpiration 195.5
48 TestDockerFlags 10.33
49 TestForceSystemdFlag 10.17
50 TestForceSystemdEnv 10.49
95 TestFunctional/parallel/ServiceCmdConnect 39.37
167 TestMultiControlPlane/serial/StopSecondaryNode 214.15
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.5
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.45
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.37
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.07
175 TestMultiControlPlane/serial/RestartCluster 5.27
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.3
184 TestJSONOutput/start/Command 9.81
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.21
216 TestMountStart/serial/StartWithMountFirst 9.92
219 TestMultiNode/serial/FreshStart2Nodes 9.96
220 TestMultiNode/serial/DeployApp2Nodes 108.2
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 49.37
228 TestMultiNode/serial/RestartKeepsNodes 9.23
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.09
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 20.41
236 TestPreload 10.03
238 TestScheduledStopUnix 10.16
239 TestSkaffold 12.4
242 TestRunningBinaryUpgrade 597.56
244 TestKubernetesUpgrade 17.36
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.41
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.08
260 TestStoppedBinaryUpgrade/Upgrade 572.87
262 TestPause/serial/Start 9.86
272 TestNoKubernetes/serial/StartWithK8s 9.83
273 TestNoKubernetes/serial/StartWithStopK8s 5.31
274 TestNoKubernetes/serial/Start 5.3
278 TestNoKubernetes/serial/StartNoArgs 5.33
280 TestNetworkPlugins/group/auto/Start 9.83
281 TestNetworkPlugins/group/calico/Start 9.95
282 TestNetworkPlugins/group/custom-flannel/Start 9.83
283 TestNetworkPlugins/group/false/Start 9.92
284 TestNetworkPlugins/group/kindnet/Start 9.87
285 TestNetworkPlugins/group/flannel/Start 9.79
286 TestNetworkPlugins/group/enable-default-cni/Start 9.91
287 TestNetworkPlugins/group/bridge/Start 9.88
288 TestNetworkPlugins/group/kubenet/Start 9.78
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.95
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.97
303 TestStartStop/group/no-preload/serial/DeployApp 0.1
305 TestStartStop/group/embed-certs/serial/FirstStart 10.15
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.17
309 TestStartStop/group/no-preload/serial/SecondStart 7.13
310 TestStartStop/group/embed-certs/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
315 TestStartStop/group/no-preload/serial/Pause 0.1
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.15
320 TestStartStop/group/embed-certs/serial/SecondStart 7.32
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
326 TestStartStop/group/embed-certs/serial/Pause 0.1
329 TestStartStop/group/newest-cni/serial/FirstStart 10.06
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.32
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (12.33s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-345000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-345000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.325430042s)

-- stdout --
	{"specversion":"1.0","id":"d92d6436-d4a3-40aa-ac15-bf44c6b0b64c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-345000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2ac9940-c02a-4b33-8b4b-b2a62004aa18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"9ac84ea0-7d49-48f2-953b-f7d1a6566fe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig"}}
	{"specversion":"1.0","id":"a207bb63-50f3-43d5-9e1e-c0a2bdfcda67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4c518299-f843-4b50-b935-f110ba4b2873","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e36a7fa0-bca2-4980-b61f-50e6779fc5b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube"}}
	{"specversion":"1.0","id":"ae2e1c36-350e-4a1e-b796-de5107d2d72d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e68eed8f-680d-4562-955b-0cebca8ec23d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4600251-3c41-4abb-9b8d-cd864416925b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"79606fcc-1ae4-41aa-930a-18e8a0072139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd3b876b-35b7-4fa7-a12c-74d0a6c3763d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-345000\" primary control-plane node in \"download-only-345000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b4ea889-735f-4149-92d0-a03ee824cc3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec745c2a-b684-47a0-8224-eeaa7cb78396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780] Decompressors:map[bz2:0x140005afd10 gz:0x140005afd18 tar:0x140005afca0 tar.bz2:0x140005afcd0 tar.gz:0x140005afce0 tar.xz:0x140005afcf0 tar.zst:0x140005afd00 tbz2:0x140005afcd0 tgz:0x14
0005afce0 txz:0x140005afcf0 tzst:0x140005afd00 xz:0x140005afd20 zip:0x140005afd30 zst:0x140005afd28] Getters:map[file:0x14001422550 http:0x140005f20a0 https:0x140005f2190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"677725ee-0b10-4682-bea3-80929ae64291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0917 09:55:21.167437    1842 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:55:21.167589    1842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:21.167592    1842 out.go:358] Setting ErrFile to fd 2...
	I0917 09:55:21.167594    1842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:21.167741    1842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	W0917 09:55:21.167839    1842 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19662-1312/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19662-1312/.minikube/config/config.json: no such file or directory
	I0917 09:55:21.169091    1842 out.go:352] Setting JSON to true
	I0917 09:55:21.186649    1842 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1484,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 09:55:21.186718    1842 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:55:21.192043    1842 out.go:97] [download-only-345000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 09:55:21.192202    1842 notify.go:220] Checking for updates...
	W0917 09:55:21.192245    1842 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 09:55:21.195818    1842 out.go:169] MINIKUBE_LOCATION=19662
	I0917 09:55:21.202066    1842 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 09:55:21.207062    1842 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 09:55:21.211009    1842 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:55:21.213992    1842 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	W0917 09:55:21.219900    1842 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 09:55:21.220089    1842 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:55:21.225074    1842 out.go:97] Using the qemu2 driver based on user configuration
	I0917 09:55:21.225095    1842 start.go:297] selected driver: qemu2
	I0917 09:55:21.225111    1842 start.go:901] validating driver "qemu2" against <nil>
	I0917 09:55:21.225206    1842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:55:21.229024    1842 out.go:169] Automatically selected the socket_vmnet network
	I0917 09:55:21.234522    1842 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0917 09:55:21.234608    1842 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 09:55:21.234634    1842 cni.go:84] Creating CNI manager for ""
	I0917 09:55:21.234667    1842 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 09:55:21.234718    1842 start.go:340] cluster config:
	{Name:download-only-345000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:55:21.239852    1842 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:21.244016    1842 out.go:97] Downloading VM boot image ...
	I0917 09:55:21.244031    1842 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0917 09:55:26.768430    1842 out.go:97] Starting "download-only-345000" primary control-plane node in "download-only-345000" cluster
	I0917 09:55:26.768452    1842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:26.825947    1842 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 09:55:26.825955    1842 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:26.826173    1842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:26.831233    1842 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 09:55:26.831239    1842 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:26.914161    1842 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 09:55:32.181480    1842 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:32.181643    1842 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:32.884923    1842 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 09:55:32.885120    1842 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/download-only-345000/config.json ...
	I0917 09:55:32.885136    1842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/download-only-345000/config.json: {Name:mkd7327fcf68477decfb54ee13291a63ff74676c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:32.885393    1842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:32.885589    1842 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0917 09:55:33.419858    1842 out.go:193] 
	W0917 09:55:33.426918    1842 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780] Decompressors:map[bz2:0x140005afd10 gz:0x140005afd18 tar:0x140005afca0 tar.bz2:0x140005afcd0 tar.gz:0x140005afce0 tar.xz:0x140005afcf0 tar.zst:0x140005afd00 tbz2:0x140005afcd0 tgz:0x140005afce0 txz:0x140005afcf0 tzst:0x140005afd00 xz:0x140005afd20 zip:0x140005afd30 zst:0x140005afd28] Getters:map[file:0x14001422550 http:0x140005f20a0 https:0x140005f2190] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0917 09:55:33.426940    1842 out_reason.go:110] 
	W0917 09:55:33.434854    1842 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 09:55:33.438754    1842 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-345000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.33s)
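
Note on the failure above: the download aborts because the checksum file for the v1.20.0 darwin/arm64 kubectl binary returns HTTP 404 (v1.20.0 appears to predate published darwin/arm64 kubectl release binaries). A minimal standalone probe, not part of the test suite, that reproduces the 404 using only the Go standard library:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// The checksum URL referenced by the failing download; if the binary
		// was never published for this platform, a 404 here is expected.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // the log above implies: 404 Not Found
	}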

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
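
Note: this failure is a direct knock-on of TestDownloadOnly/v1.20.0/json-events above; because that download exited with status 40, kubectl was never written to the cache, so the test's stat fails. A minimal sketch of the existence check the test performs (path taken from the log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The cached binary the test expects; it was never created because
		// the earlier download step failed.
		path := "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("cached kubectl missing:", err) // matches the reported error
			return
		}
		fmt.Println("cached kubectl present")
	}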

TestOffline (10.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-349000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-349000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.099738375s)

-- stdout --
	* [offline-docker-349000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-349000" primary control-plane node in "offline-docker-349000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-349000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:41:33.449002    4462 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:41:33.449183    4462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:33.449189    4462 out.go:358] Setting ErrFile to fd 2...
	I0917 10:41:33.449191    4462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:33.449340    4462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:41:33.450562    4462 out.go:352] Setting JSON to false
	I0917 10:41:33.468050    4462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4256,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:41:33.468122    4462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:41:33.473495    4462 out.go:177] * [offline-docker-349000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:41:33.481232    4462 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:41:33.481254    4462 notify.go:220] Checking for updates...
	I0917 10:41:33.487313    4462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:41:33.488469    4462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:41:33.491312    4462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:41:33.494270    4462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:41:33.497290    4462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:41:33.500733    4462 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:33.500784    4462 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:41:33.504263    4462 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:41:33.511296    4462 start.go:297] selected driver: qemu2
	I0917 10:41:33.511309    4462 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:41:33.511317    4462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:41:33.513079    4462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:41:33.516368    4462 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:41:33.519392    4462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:41:33.519408    4462 cni.go:84] Creating CNI manager for ""
	I0917 10:41:33.519428    4462 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:41:33.519431    4462 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:41:33.519467    4462 start.go:340] cluster config:
	{Name:offline-docker-349000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:41:33.523106    4462 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:33.530303    4462 out.go:177] * Starting "offline-docker-349000" primary control-plane node in "offline-docker-349000" cluster
	I0917 10:41:33.533244    4462 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:41:33.533272    4462 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:41:33.533282    4462 cache.go:56] Caching tarball of preloaded images
	I0917 10:41:33.533366    4462 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:41:33.533371    4462 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:41:33.533437    4462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/offline-docker-349000/config.json ...
	I0917 10:41:33.533451    4462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/offline-docker-349000/config.json: {Name:mkbe811fbf7323d23643a172db1bf10b6eb4aaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:41:33.533744    4462 start.go:360] acquireMachinesLock for offline-docker-349000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:33.533788    4462 start.go:364] duration metric: took 33.708µs to acquireMachinesLock for "offline-docker-349000"
	I0917 10:41:33.533800    4462 start.go:93] Provisioning new machine with config: &{Name:offline-docker-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:33.533831    4462 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:33.541289    4462 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:33.556951    4462 start.go:159] libmachine.API.Create for "offline-docker-349000" (driver="qemu2")
	I0917 10:41:33.557002    4462 client.go:168] LocalClient.Create starting
	I0917 10:41:33.557063    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:33.557097    4462 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:33.557111    4462 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:33.557159    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:33.557184    4462 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:33.557192    4462 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:33.557558    4462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:33.718292    4462 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:33.815404    4462 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:33.815415    4462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:33.816027    4462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2
	I0917 10:41:33.825675    4462 main.go:141] libmachine: STDOUT: 
	I0917 10:41:33.825704    4462 main.go:141] libmachine: STDERR: 
	I0917 10:41:33.825769    4462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2 +20000M
	I0917 10:41:33.834487    4462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:33.834508    4462 main.go:141] libmachine: STDERR: 
	I0917 10:41:33.834533    4462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2
	I0917 10:41:33.834538    4462 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:33.834551    4462 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:33.834586    4462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:97:a5:56:29:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2
	I0917 10:41:33.836182    4462 main.go:141] libmachine: STDOUT: 
	I0917 10:41:33.836196    4462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:33.836220    4462 client.go:171] duration metric: took 279.220959ms to LocalClient.Create
	I0917 10:41:35.838241    4462 start.go:128] duration metric: took 2.304473208s to createHost
	I0917 10:41:35.838260    4462 start.go:83] releasing machines lock for "offline-docker-349000", held for 2.304538291s
	W0917 10:41:35.838279    4462 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:35.843338    4462 out.go:177] * Deleting "offline-docker-349000" in qemu2 ...
	W0917 10:41:35.861994    4462 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:35.862007    4462 start.go:729] Will try again in 5 seconds ...
	I0917 10:41:40.864026    4462 start.go:360] acquireMachinesLock for offline-docker-349000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:40.864187    4462 start.go:364] duration metric: took 116.667µs to acquireMachinesLock for "offline-docker-349000"
	I0917 10:41:40.864226    4462 start.go:93] Provisioning new machine with config: &{Name:offline-docker-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:40.864314    4462 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:40.873517    4462 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:40.903218    4462 start.go:159] libmachine.API.Create for "offline-docker-349000" (driver="qemu2")
	I0917 10:41:40.903255    4462 client.go:168] LocalClient.Create starting
	I0917 10:41:40.903354    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:40.903408    4462 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:40.903422    4462 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:40.903476    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:40.903513    4462 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:40.903524    4462 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:40.904014    4462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:41.069397    4462 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:41.458069    4462 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:41.458085    4462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:41.458307    4462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2
	I0917 10:41:41.467669    4462 main.go:141] libmachine: STDOUT: 
	I0917 10:41:41.467686    4462 main.go:141] libmachine: STDERR: 
	I0917 10:41:41.467750    4462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2 +20000M
	I0917 10:41:41.475497    4462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:41.475527    4462 main.go:141] libmachine: STDERR: 
	I0917 10:41:41.475544    4462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2
	I0917 10:41:41.475549    4462 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:41.475556    4462 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:41.475594    4462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:17:93:37:1c:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/offline-docker-349000/disk.qcow2
	I0917 10:41:41.477138    4462 main.go:141] libmachine: STDOUT: 
	I0917 10:41:41.477151    4462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:41.477166    4462 client.go:171] duration metric: took 573.923208ms to LocalClient.Create
	I0917 10:41:43.477298    4462 start.go:128] duration metric: took 2.613039833s to createHost
	I0917 10:41:43.477363    4462 start.go:83] releasing machines lock for "offline-docker-349000", held for 2.613242833s
	W0917 10:41:43.477779    4462 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:43.486475    4462 out.go:201] 
	W0917 10:41:43.491573    4462 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:41:43.491618    4462 out.go:270] * 
	* 
	W0917 10:41:43.494416    4462 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:41:43.504472    4462 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-349000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-17 10:41:43.519555 -0700 PDT m=+2782.543102751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-349000 -n offline-docker-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-349000 -n offline-docker-349000: exit status 7 (65.858209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-349000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-349000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-349000
--- FAIL: TestOffline (10.25s)
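
Note on this failure mode, which recurs across most qemu2 start failures in this run: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never handed a vmnet file descriptor and host creation fails with exit status 80. A minimal host-side probe, assuming the default socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// minikube's qemu2 driver launches QEMU through socket_vmnet_client,
		// which dials this unix socket; "connection refused" means no daemon
		// is listening, exactly what the log above reports.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet daemon is accepting connections")
	}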

TestAddons/parallel/Registry (71.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.177333ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zhs2b" [d93d54e8-7ff9-4034-a317-f6c97924ce18] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004307834s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5fb54" [f61a3ff0-e6a6-463d-8803-ff49ba95d4f4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007925041s
addons_test.go:342: (dbg) Run:  kubectl --context addons-439000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-439000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-439000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.052639375s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-439000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ip
2024/09/17 10:08:55 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable registry --alsologtostderr -v=1
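
Note: both registry pods reported healthy; what failed is the in-cluster connectivity check, where wget --spider against the registry Service DNS name timed out after 1m0s. A rough host-side equivalent of that probe, assuming the node IP and port from the DEBUG line above (the Service name only resolves inside the cluster):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Host-side stand-in for the in-cluster `wget --spider` check; from
		// outside the cluster the registry is reachable via the node IP that
		// appears in the DEBUG line, rather than the Service DNS name.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://192.168.105.2:5000")
		if err != nil {
			fmt.Println("registry endpoint unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status) // the test expects HTTP/1.1 200
	}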
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-439000 -n addons-439000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-345000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | -p download-only-345000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-345000              | download-only-345000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| start   | -o=json --download-only              | download-only-470000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | -p download-only-470000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-470000              | download-only-470000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-345000              | download-only-345000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-470000              | download-only-470000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| start   | --download-only -p                   | binary-mirror-006000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | binary-mirror-006000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49313               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-006000              | binary-mirror-006000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| addons  | enable dashboard -p                  | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | addons-439000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | addons-439000                        |                      |         |         |                     |                     |
	| start   | -p addons-439000 --wait=true         | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:59 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 09:59 PDT | 17 Sep 24 09:59 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-439000 addons                 | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-439000 addons                 | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-439000 addons                 | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | addons-439000                        |                      |         |         |                     |                     |
	| ssh     | addons-439000 ssh curl -s            | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-439000 ip                     | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| ip      | addons-439000 ip                     | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT |                     |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:55:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:55:46.171004    1927 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:55:46.171110    1927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:46.171114    1927 out.go:358] Setting ErrFile to fd 2...
	I0917 09:55:46.171116    1927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:46.171255    1927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 09:55:46.172535    1927 out.go:352] Setting JSON to false
	I0917 09:55:46.189099    1927 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1509,"bootTime":1726590637,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 09:55:46.189199    1927 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:55:46.193072    1927 out.go:177] * [addons-439000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 09:55:46.200133    1927 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 09:55:46.200172    1927 notify.go:220] Checking for updates...
	I0917 09:55:46.207104    1927 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 09:55:46.210116    1927 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 09:55:46.213078    1927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:55:46.216085    1927 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 09:55:46.219156    1927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:55:46.220665    1927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:55:46.225073    1927 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 09:55:46.231929    1927 start.go:297] selected driver: qemu2
	I0917 09:55:46.231935    1927 start.go:901] validating driver "qemu2" against <nil>
	I0917 09:55:46.231941    1927 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:55:46.234329    1927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:55:46.237062    1927 out.go:177] * Automatically selected the socket_vmnet network
	I0917 09:55:46.240147    1927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 09:55:46.240164    1927 cni.go:84] Creating CNI manager for ""
	I0917 09:55:46.240187    1927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:55:46.240191    1927 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 09:55:46.240242    1927 start.go:340] cluster config:
	{Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:55:46.244160    1927 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:46.253122    1927 out.go:177] * Starting "addons-439000" primary control-plane node in "addons-439000" cluster
	I0917 09:55:46.257122    1927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:46.257146    1927 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 09:55:46.257154    1927 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:46.257218    1927 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 09:55:46.257224    1927 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 09:55:46.257446    1927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/config.json ...
	I0917 09:55:46.257458    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/config.json: {Name:mk8d37b4694104f52e3efe56c28f5fa274bd8571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:46.257767    1927 start.go:360] acquireMachinesLock for addons-439000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 09:55:46.257837    1927 start.go:364] duration metric: took 63.084µs to acquireMachinesLock for "addons-439000"
	I0917 09:55:46.257847    1927 start.go:93] Provisioning new machine with config: &{Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 09:55:46.257892    1927 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 09:55:46.265111    1927 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 09:55:46.507764    1927 start.go:159] libmachine.API.Create for "addons-439000" (driver="qemu2")
	I0917 09:55:46.507820    1927 client.go:168] LocalClient.Create starting
	I0917 09:55:46.507999    1927 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 09:55:46.647042    1927 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 09:55:46.865723    1927 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 09:55:47.373251    1927 main.go:141] libmachine: Creating SSH key...
	I0917 09:55:47.668436    1927 main.go:141] libmachine: Creating Disk image...
	I0917 09:55:47.668458    1927 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 09:55:47.668752    1927 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/disk.qcow2
	I0917 09:55:47.689901    1927 main.go:141] libmachine: STDOUT: 
	I0917 09:55:47.689931    1927 main.go:141] libmachine: STDERR: 
	I0917 09:55:47.690000    1927 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/disk.qcow2 +20000M
	I0917 09:55:47.698528    1927 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 09:55:47.698546    1927 main.go:141] libmachine: STDERR: 
	I0917 09:55:47.698560    1927 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/disk.qcow2
	I0917 09:55:47.698565    1927 main.go:141] libmachine: Starting QEMU VM...
	I0917 09:55:47.698602    1927 qemu.go:418] Using hvf for hardware acceleration
	I0917 09:55:47.698635    1927 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:13:86:28:f3:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/disk.qcow2
	I0917 09:55:47.757460    1927 main.go:141] libmachine: STDOUT: 
	I0917 09:55:47.757498    1927 main.go:141] libmachine: STDERR: 
	I0917 09:55:47.757501    1927 main.go:141] libmachine: Attempt 0
	I0917 09:55:47.757516    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:47.757576    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:47.757596    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:55:49.759738    1927 main.go:141] libmachine: Attempt 1
	I0917 09:55:49.759854    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:49.760296    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:49.760347    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:55:51.762591    1927 main.go:141] libmachine: Attempt 2
	I0917 09:55:51.762731    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:51.763026    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:51.763078    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:55:53.765222    1927 main.go:141] libmachine: Attempt 3
	I0917 09:55:53.765254    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:53.765323    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:53.765350    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:55:55.767361    1927 main.go:141] libmachine: Attempt 4
	I0917 09:55:55.767374    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:55.767406    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:55.767421    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:55:57.769418    1927 main.go:141] libmachine: Attempt 5
	I0917 09:55:57.769427    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:57.769466    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:57.769474    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:55:59.771502    1927 main.go:141] libmachine: Attempt 6
	I0917 09:55:59.771523    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:55:59.771591    1927 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:59.771600    1927 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66eb05cd}
	I0917 09:56:01.773615    1927 main.go:141] libmachine: Attempt 7
	I0917 09:56:01.773640    1927 main.go:141] libmachine: Searching for e:13:86:28:f3:3b in /var/db/dhcpd_leases ...
	I0917 09:56:01.773768    1927 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0917 09:56:01.773783    1927 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e:13:86:28:f3:3b ID:1,e:13:86:28:f3:3b Lease:0x66eb0620}
	I0917 09:56:01.773803    1927 main.go:141] libmachine: Found match: e:13:86:28:f3:3b
	I0917 09:56:01.773813    1927 main.go:141] libmachine: IP: 192.168.105.2
	I0917 09:56:01.773818    1927 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
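The Attempt 0 through Attempt 7 lines above show minikube polling macOS's /var/db/dhcpd_leases until a lease appears for the VM's MAC address (e:13:86:28:f3:3b). A simplified Go sketch of that polling loop (the lease-file parsing here is illustrative, not minikube's actual parser; real records are brace-delimited):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    // findLeaseIP scans the macOS DHCP lease database for a record whose
    // hardware address contains mac, returning the last ip_address seen
    // before the match.
    func findLeaseIP(mac string) (string, bool) {
    	data, err := os.ReadFile("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", false
    	}
    	var ip string
    	for _, line := range strings.Split(string(data), "\n") {
    		line = strings.TrimSpace(line)
    		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
    			ip = v
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
    			return ip, true
    		}
    	}
    	return "", false
    }

    func main() {
    	mac := "e:13:86:28:f3:3b" // MAC assigned to the VM above
    	for attempt := 0; attempt < 30; attempt++ {
    		if ip, ok := findLeaseIP(mac); ok {
    			fmt.Printf("attempt %d: found %s at %s\n", attempt, mac, ip)
    			return
    		}
    		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
    	}
    	fmt.Println("no lease found")
    }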
	I0917 09:56:02.793242    1927 machine.go:93] provisionDockerMachine start ...
	I0917 09:56:02.794025    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:02.794429    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:02.794443    1927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 09:56:02.867583    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 09:56:02.867614    1927 buildroot.go:166] provisioning hostname "addons-439000"
	I0917 09:56:02.867787    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:02.868051    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:02.868063    1927 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-439000 && echo "addons-439000" | sudo tee /etc/hostname
	I0917 09:56:02.932979    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-439000
	
	I0917 09:56:02.933057    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:02.933201    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:02.933212    1927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-439000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-439000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-439000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 09:56:02.989229    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 09:56:02.989241    1927 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1312/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1312/.minikube}
	I0917 09:56:02.989250    1927 buildroot.go:174] setting up certificates
	I0917 09:56:02.989255    1927 provision.go:84] configureAuth start
	I0917 09:56:02.989262    1927 provision.go:143] copyHostCerts
	I0917 09:56:02.989373    1927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem (1679 bytes)
	I0917 09:56:02.990297    1927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem (1078 bytes)
	I0917 09:56:02.990435    1927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem (1123 bytes)
	I0917 09:56:02.990659    1927 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem org=jenkins.addons-439000 san=[127.0.0.1 192.168.105.2 addons-439000 localhost minikube]
	I0917 09:56:03.039893    1927 provision.go:177] copyRemoteCerts
	I0917 09:56:03.039949    1927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 09:56:03.039957    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:03.067254    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 09:56:03.075586    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 09:56:03.083927    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 09:56:03.092345    1927 provision.go:87] duration metric: took 103.071833ms to configureAuth
	I0917 09:56:03.092355    1927 buildroot.go:189] setting minikube options for container-runtime
	I0917 09:56:03.093659    1927 config.go:182] Loaded profile config "addons-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:56:03.093700    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:03.093794    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:03.093799    1927 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 09:56:03.141735    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 09:56:03.141750    1927 buildroot.go:70] root file system type: tmpfs
	I0917 09:56:03.141810    1927 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 09:56:03.141856    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:03.141952    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:03.141993    1927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 09:56:03.193769    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 09:56:03.193821    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:03.193935    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:03.193943    1927 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 09:56:04.560481    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
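The diff-or-replace one-liner above is the whole unit-update mechanism: render docker.service.new, and only when it differs from the installed unit, move it into place and daemon-reload/enable/restart. A Go sketch of the same compare-then-swap (paths taken from the log; the comparison is simplified to byte equality, where the remote side uses diff -u):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // swapUnitIfChanged replaces the installed docker.service with the
    // freshly rendered .new file only when the contents differ, then
    // reloads systemd and restarts the service.
    func swapUnitIfChanged(installed, rendered string) error {
    	oldData, _ := os.ReadFile(installed) // missing file reads as nil, i.e. "changed"
    	newData, err := os.ReadFile(rendered)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(oldData, newData) {
    		return nil // nothing to do
    	}
    	if err := os.Rename(rendered, installed); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", append([]string{"-f"}, args...)...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(swapUnitIfChanged(
    		"/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"))
    }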
	
	I0917 09:56:04.560494    1927 machine.go:96] duration metric: took 1.767252834s to provisionDockerMachine
	I0917 09:56:04.560500    1927 client.go:171] duration metric: took 18.052982792s to LocalClient.Create
	I0917 09:56:04.560515    1927 start.go:167] duration metric: took 18.053073166s to libmachine.API.Create "addons-439000"
	I0917 09:56:04.560522    1927 start.go:293] postStartSetup for "addons-439000" (driver="qemu2")
	I0917 09:56:04.560527    1927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 09:56:04.560605    1927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 09:56:04.560615    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:04.589115    1927 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 09:56:04.590822    1927 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 09:56:04.590832    1927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/addons for local assets ...
	I0917 09:56:04.590927    1927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/files for local assets ...
	I0917 09:56:04.590957    1927 start.go:296] duration metric: took 30.43325ms for postStartSetup
	I0917 09:56:04.591359    1927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/config.json ...
	I0917 09:56:04.591551    1927 start.go:128] duration metric: took 18.333968583s to createHost
	I0917 09:56:04.591578    1927 main.go:141] libmachine: Using SSH client type: native
	I0917 09:56:04.591671    1927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102991190] 0x1029939d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 09:56:04.591676    1927 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 09:56:04.641020    1927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726592164.844652835
	
	I0917 09:56:04.641030    1927 fix.go:216] guest clock: 1726592164.844652835
	I0917 09:56:04.641034    1927 fix.go:229] Guest: 2024-09-17 09:56:04.844652835 -0700 PDT Remote: 2024-09-17 09:56:04.591554 -0700 PDT m=+18.439588418 (delta=253.098835ms)
	I0917 09:56:04.641045    1927 fix.go:200] guest clock delta is within tolerance: 253.098835ms
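The guest-clock check above is a straightforward skew comparison: read `date +%s.%N` inside the guest, subtract the host's wall clock, and resync only if the delta exceeds a tolerance. A hedged Go sketch of that comparison (the 2-second tolerance is an assumption for illustration, not a value from the log):

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    // clockDeltaOK reports whether guest/host skew is small enough
    // to skip an explicit clock resync.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	return math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(253 * time.Millisecond) // the delta seen in the log above
    	fmt.Println(clockDeltaOK(guest, host, 2*time.Second))
    }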
	I0917 09:56:04.641048    1927 start.go:83] releasing machines lock for "addons-439000", held for 18.383520416s
	I0917 09:56:04.641371    1927 ssh_runner.go:195] Run: cat /version.json
	I0917 09:56:04.641375    1927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 09:56:04.641379    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:04.641410    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:04.716362    1927 ssh_runner.go:195] Run: systemctl --version
	I0917 09:56:04.719423    1927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 09:56:04.721578    1927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 09:56:04.721616    1927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 09:56:04.727786    1927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 09:56:04.727794    1927 start.go:495] detecting cgroup driver to use...
	I0917 09:56:04.727911    1927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 09:56:04.734540    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 09:56:04.738054    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 09:56:04.741438    1927 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 09:56:04.741465    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 09:56:04.745054    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 09:56:04.748781    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 09:56:04.752898    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 09:56:04.756665    1927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 09:56:04.760497    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 09:56:04.764380    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 09:56:04.768356    1927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 09:56:04.772483    1927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 09:56:04.776477    1927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 09:56:04.780251    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:04.846118    1927 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 09:56:04.852917    1927 start.go:495] detecting cgroup driver to use...
	I0917 09:56:04.852979    1927 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 09:56:04.860904    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 09:56:04.866818    1927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 09:56:04.875601    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 09:56:04.881018    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 09:56:04.886402    1927 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 09:56:04.923898    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 09:56:04.929917    1927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 09:56:04.936430    1927 ssh_runner.go:195] Run: which cri-dockerd
	I0917 09:56:04.937846    1927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 09:56:04.940978    1927 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 09:56:04.946928    1927 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 09:56:05.014361    1927 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 09:56:05.081320    1927 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 09:56:05.081384    1927 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 09:56:05.087576    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:05.154935    1927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 09:56:07.328982    1927 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.174063917s)
	I0917 09:56:07.329056    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 09:56:07.334599    1927 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 09:56:07.341119    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 09:56:07.346709    1927 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 09:56:07.413722    1927 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 09:56:07.487004    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:07.557688    1927 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 09:56:07.564482    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 09:56:07.569920    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:07.630141    1927 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 09:56:07.656302    1927 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 09:56:07.656395    1927 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 09:56:07.658758    1927 start.go:563] Will wait 60s for crictl version
	I0917 09:56:07.658796    1927 ssh_runner.go:195] Run: which crictl
	I0917 09:56:07.660285    1927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 09:56:07.678772    1927 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 09:56:07.678866    1927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 09:56:07.691328    1927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 09:56:07.702455    1927 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 09:56:07.702540    1927 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0917 09:56:07.704084    1927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
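That one-liner is an idempotent hosts-entry rewrite: drop any existing line for the name, append the fresh ip-to-name mapping, and copy the result back over /etc/hosts via a temp file. The same transformation in Go (illustrative only; minikube runs it as the bash pipeline shown above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry mirrors the grep -v / echo pipeline: remove any
    // line ending in "\t<name>" and append "ip\tname".
    func upsertHostsEntry(content, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(content, "\n") {
    		if strings.HasSuffix(line, "\t"+name) || line == "" {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, _ := os.ReadFile("/etc/hosts")
    	fmt.Print(upsertHostsEntry(string(data), "192.168.105.1", "host.minikube.internal"))
    }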
	I0917 09:56:07.708466    1927 kubeadm.go:883] updating cluster {Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 09:56:07.708511    1927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:56:07.708557    1927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 09:56:07.718197    1927 docker.go:685] Got preloaded images: 
	I0917 09:56:07.718205    1927 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0917 09:56:07.718259    1927 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 09:56:07.721639    1927 ssh_runner.go:195] Run: which lz4
	I0917 09:56:07.722988    1927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 09:56:07.724314    1927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 09:56:07.724323    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0917 09:56:08.981041    1927 docker.go:649] duration metric: took 1.25811225s to copy over tarball
	I0917 09:56:08.981132    1927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 09:56:09.937339    1927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 09:56:09.952059    1927 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 09:56:09.955921    1927 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0917 09:56:09.961893    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:10.027116    1927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 09:56:12.229704    1927 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.202607167s)
	I0917 09:56:12.229821    1927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 09:56:12.235964    1927 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 09:56:12.235975    1927 cache_images.go:84] Images are preloaded, skipping loading
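The preload sequence above (stat the remote tarball, scp it over when absent, untar into /var with lz4, remove it, restart Docker) is what turns a cold VM into one with all control-plane images present. An equivalent local Go sketch (paths from the log; this is only meaningful inside a minikube guest):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensurePreload mirrors the flow in the log: verify the preload
    // tarball is present, extract it into the Docker data root with
    // lz4, then remove it. A docker restart afterwards picks the
    // images up, as seen above.
    func ensurePreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload tarball missing: %w", err)
    	}
    	// equivalent of: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract failed: %v\n%s", err, out)
    	}
    	return os.Remove(tarball)
    }

    func main() {
    	if err := ensurePreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }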
	I0917 09:56:12.235980    1927 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0917 09:56:12.236060    1927 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-439000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 09:56:12.236128    1927 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 09:56:12.256902    1927 cni.go:84] Creating CNI manager for ""
	I0917 09:56:12.256917    1927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:56:12.256936    1927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 09:56:12.256947    1927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-439000 NodeName:addons-439000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 09:56:12.257004    1927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-439000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 09:56:12.257068    1927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 09:56:12.260772    1927 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 09:56:12.260812    1927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 09:56:12.264295    1927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 09:56:12.270236    1927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 09:56:12.276168    1927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0917 09:56:12.282183    1927 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0917 09:56:12.283473    1927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 09:56:12.288007    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:12.382739    1927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 09:56:12.390079    1927 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000 for IP: 192.168.105.2
	I0917 09:56:12.390088    1927 certs.go:194] generating shared ca certs ...
	I0917 09:56:12.390097    1927 certs.go:226] acquiring lock for ca certs: {Name:mk1d9837d65f8f1762ad8daf2cfbb53face1f201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.390302    1927 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key
	I0917 09:56:12.473134    1927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt ...
	I0917 09:56:12.473145    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt: {Name:mk30feeaff8a38602c1fcdce752fac8a3f7d001f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.473435    1927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key ...
	I0917 09:56:12.473439    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key: {Name:mk5dfc8a5b016bc5c7d804577fce8166c8800ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.473577    1927 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key
	I0917 09:56:12.544422    1927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.crt ...
	I0917 09:56:12.544429    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.crt: {Name:mkb9764e6708ae81a45e1e6a55cfb8e4989ac8a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.544572    1927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key ...
	I0917 09:56:12.544575    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key: {Name:mk183e6dbc543001b5df2154015cda158ef6f9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.544697    1927 certs.go:256] generating profile certs ...
	I0917 09:56:12.544728    1927 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.key
	I0917 09:56:12.544736    1927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt with IP's: []
	I0917 09:56:12.614134    1927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt ...
	I0917 09:56:12.614138    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: {Name:mka3f464cbbcbc61fffe6fbc752b9993399d190f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.614273    1927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.key ...
	I0917 09:56:12.614275    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.key: {Name:mk24b2e77ad64f9b33f94f67234a8e9fe5210168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.614388    1927 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.key.ce722006
	I0917 09:56:12.614400    1927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.crt.ce722006 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0917 09:56:12.785506    1927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.crt.ce722006 ...
	I0917 09:56:12.785510    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.crt.ce722006: {Name:mk7631ca1a7439687ef18935e28fdfe455926efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.785678    1927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.key.ce722006 ...
	I0917 09:56:12.785682    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.key.ce722006: {Name:mk568b0e5970826264ef78f29e439a8d69b09028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.785799    1927 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.crt.ce722006 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.crt
	I0917 09:56:12.786093    1927 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.key.ce722006 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.key
	I0917 09:56:12.786229    1927 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.key
	I0917 09:56:12.786242    1927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.crt with IP's: []
	I0917 09:56:12.843789    1927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.crt ...
	I0917 09:56:12.843794    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.crt: {Name:mke94c07aac795e7cfbeb78c404629347a5dbf55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.843956    1927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.key ...
	I0917 09:56:12.843959    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.key: {Name:mk5ace82f0ab4810168dbe6c8244ce6c452704f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:12.844239    1927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 09:56:12.844266    1927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem (1078 bytes)
	I0917 09:56:12.844290    1927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem (1123 bytes)
	I0917 09:56:12.844607    1927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem (1679 bytes)
	I0917 09:56:12.845207    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 09:56:12.854465    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 09:56:12.862956    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 09:56:12.871161    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 09:56:12.879734    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 09:56:12.888260    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 09:56:12.896629    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 09:56:12.904788    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 09:56:12.913112    1927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 09:56:12.921185    1927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 09:56:12.928023    1927 ssh_runner.go:195] Run: openssl version
	I0917 09:56:12.930488    1927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 09:56:12.934133    1927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 09:56:12.935738    1927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 09:56:12.935760    1927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 09:56:12.938027    1927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
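Editor's note: the b5213941.0 filename is not arbitrary. OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filenames with a numeric suffix, and the hash is exactly what the `openssl x509 -hash -noout` call two lines up prints. Tying the two logged steps together:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ readlink /etc/ssl/certs/b5213941.0      # symlink created above
    /etc/ssl/certs/minikubeCA.pem
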
	I0917 09:56:12.941602    1927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 09:56:12.943058    1927 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 09:56:12.943098    1927 kubeadm.go:392] StartCluster: {Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:56:12.943175    1927 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 09:56:12.948324    1927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 09:56:12.952190    1927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 09:56:12.955767    1927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 09:56:12.959333    1927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 09:56:12.959338    1927 kubeadm.go:157] found existing configuration files:
	
	I0917 09:56:12.959363    1927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 09:56:12.962601    1927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 09:56:12.962630    1927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 09:56:12.965859    1927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 09:56:12.968861    1927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 09:56:12.968886    1927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 09:56:12.972376    1927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 09:56:12.976014    1927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 09:56:12.976041    1927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 09:56:12.979545    1927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 09:56:12.982885    1927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 09:56:12.982911    1927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 09:56:12.985933    1927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
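Editor's note: rewrapped for readability, the init invocation above is the following (bash); each ignored preflight check corresponds to a directory or manifest minikube pre-stages itself, plus the Swap/NumCPU/Mem checks it handles on its own:

    ignore=DirAvailable--etc-kubernetes-manifests
    ignore+=,DirAvailable--var-lib-minikube
    ignore+=,DirAvailable--var-lib-minikube-etcd
    ignore+=,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
    ignore+=,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
    ignore+=,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
    ignore+=,FileAvailable--etc-kubernetes-manifests-etcd.yaml
    ignore+=,Port-10250,Swap,NumCPU,Mem
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors="$ignore"
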
	I0917 09:56:13.007526    1927 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 09:56:13.007556    1927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 09:56:13.051370    1927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 09:56:13.051420    1927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 09:56:13.051476    1927 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 09:56:13.055172    1927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 09:56:13.064344    1927 out.go:235]   - Generating certificates and keys ...
	I0917 09:56:13.064395    1927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 09:56:13.064423    1927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 09:56:13.083719    1927 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 09:56:13.207611    1927 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 09:56:13.298560    1927 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 09:56:13.341062    1927 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 09:56:13.459800    1927 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 09:56:13.459867    1927 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-439000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0917 09:56:13.581302    1927 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 09:56:13.581371    1927 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-439000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0917 09:56:13.714194    1927 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 09:56:13.803023    1927 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 09:56:13.927420    1927 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 09:56:13.927462    1927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 09:56:14.038771    1927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 09:56:14.264306    1927 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 09:56:14.438661    1927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 09:56:14.474679    1927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 09:56:14.695854    1927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 09:56:14.696012    1927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 09:56:14.697206    1927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 09:56:14.704414    1927 out.go:235]   - Booting up control plane ...
	I0917 09:56:14.704478    1927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 09:56:14.704528    1927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 09:56:14.704580    1927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 09:56:14.707750    1927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 09:56:14.710338    1927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 09:56:14.710363    1927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 09:56:14.786402    1927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 09:56:14.786505    1927 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 09:56:15.289370    1927 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.2425ms
	I0917 09:56:15.289624    1927 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 09:56:18.293147    1927 kubeadm.go:310] [api-check] The API server is healthy after 3.003346585s
	I0917 09:56:18.314810    1927 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 09:56:18.326644    1927 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 09:56:18.341551    1927 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 09:56:18.341741    1927 kubeadm.go:310] [mark-control-plane] Marking the node addons-439000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 09:56:18.347100    1927 kubeadm.go:310] [bootstrap-token] Using token: kc62qo.xh3pqncs3nh40dpm
	I0917 09:56:18.350556    1927 out.go:235]   - Configuring RBAC rules ...
	I0917 09:56:18.350653    1927 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 09:56:18.351790    1927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 09:56:18.358732    1927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 09:56:18.359977    1927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 09:56:18.361261    1927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 09:56:18.362876    1927 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 09:56:18.706640    1927 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 09:56:19.106267    1927 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 09:56:19.704325    1927 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 09:56:19.705415    1927 kubeadm.go:310] 
	I0917 09:56:19.705515    1927 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 09:56:19.705533    1927 kubeadm.go:310] 
	I0917 09:56:19.705632    1927 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 09:56:19.705646    1927 kubeadm.go:310] 
	I0917 09:56:19.705678    1927 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 09:56:19.705939    1927 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 09:56:19.706012    1927 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 09:56:19.706020    1927 kubeadm.go:310] 
	I0917 09:56:19.706122    1927 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 09:56:19.706130    1927 kubeadm.go:310] 
	I0917 09:56:19.706219    1927 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 09:56:19.706260    1927 kubeadm.go:310] 
	I0917 09:56:19.706313    1927 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 09:56:19.706400    1927 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 09:56:19.706522    1927 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 09:56:19.706531    1927 kubeadm.go:310] 
	I0917 09:56:19.706610    1927 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 09:56:19.706691    1927 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 09:56:19.706697    1927 kubeadm.go:310] 
	I0917 09:56:19.706808    1927 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kc62qo.xh3pqncs3nh40dpm \
	I0917 09:56:19.706946    1927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d \
	I0917 09:56:19.706972    1927 kubeadm.go:310] 	--control-plane 
	I0917 09:56:19.706976    1927 kubeadm.go:310] 
	I0917 09:56:19.707062    1927 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 09:56:19.707071    1927 kubeadm.go:310] 
	I0917 09:56:19.707188    1927 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kc62qo.xh3pqncs3nh40dpm \
	I0917 09:56:19.707328    1927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d 
	I0917 09:56:19.708852    1927 kubeadm.go:310] W0917 16:56:13.210309    1579 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 09:56:19.709166    1927 kubeadm.go:310] W0917 16:56:13.210617    1579 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 09:56:19.709278    1927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
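Editor's note: the two deprecation warnings are actionable. kubeadm v1.31 still accepts the kubeadm.k8s.io/v1beta3 spec minikube generates but prefers v1beta4, and ships the converter the warning names. Following the suggested command (the output path here is a placeholder):

    sudo kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /tmp/kubeadm-v1beta4.yaml
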
	I0917 09:56:19.709300    1927 cni.go:84] Creating CNI manager for ""
	I0917 09:56:19.709320    1927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:56:19.713305    1927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 09:56:19.721435    1927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 09:56:19.728464    1927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
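Editor's note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. For orientation only, a representative bridge conflist for the 10.244.0.0/16 pod subnet configured above looks like the following; this is an illustrative sketch, not the exact file minikube writes:

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
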
	I0917 09:56:19.737439    1927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 09:56:19.737504    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:19.737541    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-439000 minikube.k8s.io/updated_at=2024_09_17T09_56_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-439000 minikube.k8s.io/primary=true
	I0917 09:56:19.804971    1927 ops.go:34] apiserver oom_adj: -16
	I0917 09:56:19.805013    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:20.305496    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:20.807095    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:21.307010    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:21.807121    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:22.305435    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:22.807065    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:23.305234    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:23.805791    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:24.306518    1927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:24.343954    1927 kubeadm.go:1113] duration metric: took 4.606577625s to wait for elevateKubeSystemPrivileges
	I0917 09:56:24.343972    1927 kubeadm.go:394] duration metric: took 11.401070959s to StartCluster
	I0917 09:56:24.343983    1927 settings.go:142] acquiring lock: {Name:mk01dda79792b7eaa96d8ee72bfae59b39d5fab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:24.344160    1927 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 09:56:24.344349    1927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:24.344594    1927 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 09:56:24.344603    1927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 09:56:24.344627    1927 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 09:56:24.344679    1927 addons.go:69] Setting yakd=true in profile "addons-439000"
	I0917 09:56:24.344686    1927 addons.go:234] Setting addon yakd=true in "addons-439000"
	I0917 09:56:24.344700    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344734    1927 addons.go:69] Setting inspektor-gadget=true in profile "addons-439000"
	I0917 09:56:24.344746    1927 addons.go:234] Setting addon inspektor-gadget=true in "addons-439000"
	I0917 09:56:24.344744    1927 addons.go:69] Setting cloud-spanner=true in profile "addons-439000"
	I0917 09:56:24.344754    1927 addons.go:69] Setting storage-provisioner=true in profile "addons-439000"
	I0917 09:56:24.344761    1927 addons.go:69] Setting ingress=true in profile "addons-439000"
	I0917 09:56:24.344765    1927 addons.go:234] Setting addon storage-provisioner=true in "addons-439000"
	I0917 09:56:24.344766    1927 addons.go:234] Setting addon cloud-spanner=true in "addons-439000"
	I0917 09:56:24.344774    1927 addons.go:69] Setting gcp-auth=true in profile "addons-439000"
	I0917 09:56:24.344779    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344781    1927 mustload.go:65] Loading cluster: addons-439000
	I0917 09:56:24.344799    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344830    1927 config.go:182] Loaded profile config "addons-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:56:24.344865    1927 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-439000"
	I0917 09:56:24.344858    1927 addons.go:69] Setting default-storageclass=true in profile "addons-439000"
	I0917 09:56:24.344875    1927 addons.go:69] Setting volcano=true in profile "addons-439000"
	I0917 09:56:24.344879    1927 addons.go:69] Setting registry=true in profile "addons-439000"
	I0917 09:56:24.344883    1927 addons.go:234] Setting addon registry=true in "addons-439000"
	I0917 09:56:24.344884    1927 addons.go:234] Setting addon volcano=true in "addons-439000"
	I0917 09:56:24.344886    1927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-439000"
	I0917 09:56:24.344892    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344893    1927 addons.go:69] Setting volumesnapshots=true in profile "addons-439000"
	I0917 09:56:24.344898    1927 addons.go:234] Setting addon volumesnapshots=true in "addons-439000"
	I0917 09:56:24.344903    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344869    1927 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-439000"
	I0917 09:56:24.344999    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344760    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.345220    1927 retry.go:31] will retry after 566.541722ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.344868    1927 config.go:182] Loaded profile config "addons-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:56:24.345278    1927 retry.go:31] will retry after 1.170707803s: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.344769    1927 addons.go:234] Setting addon ingress=true in "addons-439000"
	I0917 09:56:24.345293    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.344873    1927 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-439000"
	I0917 09:56:24.345225    1927 retry.go:31] will retry after 1.088106551s: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.345335    1927 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-439000"
	I0917 09:56:24.345348    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.345348    1927 retry.go:31] will retry after 865.087118ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.344764    1927 addons.go:69] Setting ingress-dns=true in profile "addons-439000"
	I0917 09:56:24.345356    1927 addons.go:234] Setting addon ingress-dns=true in "addons-439000"
	I0917 09:56:24.345363    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.345374    1927 retry.go:31] will retry after 579.644449ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.344876    1927 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-439000"
	I0917 09:56:24.345381    1927 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-439000"
	I0917 09:56:24.345443    1927 retry.go:31] will retry after 572.027731ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.344890    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.345499    1927 retry.go:31] will retry after 742.585308ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.345515    1927 retry.go:31] will retry after 1.141489784s: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.344872    1927 addons.go:69] Setting metrics-server=true in profile "addons-439000"
	I0917 09:56:24.345545    1927 addons.go:234] Setting addon metrics-server=true in "addons-439000"
	I0917 09:56:24.345552    1927 retry.go:31] will retry after 1.180388155s: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.345555    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.345443    1927 retry.go:31] will retry after 573.30143ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.345634    1927 retry.go:31] will retry after 1.370251789s: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.345746    1927 retry.go:31] will retry after 940.327089ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.345791    1927 retry.go:31] will retry after 780.270103ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.349122    1927 out.go:177] * Verifying Kubernetes components...
	I0917 09:56:24.356057    1927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 09:56:24.360110    1927 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 09:56:24.360175    1927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:24.364165    1927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 09:56:24.364171    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 09:56:24.364178    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:24.368046    1927 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 09:56:24.368055    1927 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 09:56:24.368064    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:24.402663    1927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
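Editor's note: the sed pipeline above rewrites the Corefile held in the coredns ConfigMap in place: the first expression inserts a hosts block immediately before the existing `forward . /etc/resolv.conf` line, the second inserts `log` before `errors`, and the result is pushed back with `kubectl replace`. Reconstructed from the two sed expressions, the edited excerpt of the Corefile reads:

    .:53 {
        log                # inserted before the stock "errors" line
        errors
        # ...other stock kubernetes Corefile directives unchanged...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }

This is what makes host.minikube.internal resolvable from inside the cluster, matching the "host record injected into CoreDNS's ConfigMap" line below.
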
	I0917 09:56:24.484730    1927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 09:56:24.499094    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 09:56:24.538331    1927 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 09:56:24.538342    1927 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 09:56:24.562114    1927 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 09:56:24.562126    1927 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 09:56:24.571560    1927 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 09:56:24.571572    1927 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 09:56:24.587955    1927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 09:56:24.587969    1927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 09:56:24.594994    1927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 09:56:24.595009    1927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 09:56:24.608848    1927 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 09:56:24.608865    1927 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 09:56:24.616214    1927 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 09:56:24.616222    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 09:56:24.623972    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 09:56:24.662939    1927 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0917 09:56:24.665362    1927 node_ready.go:35] waiting up to 6m0s for node "addons-439000" to be "Ready" ...
	I0917 09:56:24.670808    1927 node_ready.go:49] node "addons-439000" has status "Ready":"True"
	I0917 09:56:24.670826    1927 node_ready.go:38] duration metric: took 5.442ms for node "addons-439000" to be "Ready" ...
	I0917 09:56:24.670831    1927 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 09:56:24.675467    1927 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:24.933085    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 09:56:24.933210    1927 retry.go:31] will retry after 1.139088842s: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0917 09:56:24.934120    1927 addons.go:234] Setting addon default-storageclass=true in "addons-439000"
	I0917 09:56:24.934138    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:24.936732    1927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 09:56:24.936767    1927 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 09:56:24.936783    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:24.937297    1927 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 09:56:24.937302    1927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 09:56:24.937307    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:24.942709    1927 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 09:56:24.947147    1927 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 09:56:24.947158    1927 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 09:56:24.947169    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:24.997929    1927 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 09:56:24.997944    1927 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 09:56:25.006731    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 09:56:25.017652    1927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 09:56:25.017665    1927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 09:56:25.031029    1927 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 09:56:25.031042    1927 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 09:56:25.057034    1927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 09:56:25.057050    1927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 09:56:25.057239    1927 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 09:56:25.057243    1927 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 09:56:25.084855    1927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 09:56:25.084868    1927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 09:56:25.084973    1927 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 09:56:25.084977    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 09:56:25.089367    1927 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-439000"
	I0917 09:56:25.089388    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:25.093653    1927 out.go:177]   - Using image docker.io/busybox:stable
	I0917 09:56:25.097764    1927 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 09:56:25.101791    1927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 09:56:25.101802    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 09:56:25.101813    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.102121    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 09:56:25.115828    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 09:56:25.115841    1927 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 09:56:25.130703    1927 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 09:56:25.133774    1927 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 09:56:25.137766    1927 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 09:56:25.137774    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 09:56:25.137785    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.162798    1927 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 09:56:25.162808    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 09:56:25.172793    1927 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-439000" context rescaled to 1 replicas
	I0917 09:56:25.209014    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 09:56:25.213283    1927 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 09:56:25.217745    1927 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 09:56:25.217757    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 09:56:25.217769    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.218094    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 09:56:25.263653    1927 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 09:56:25.263665    1927 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 09:56:25.290827    1927 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 09:56:25.294794    1927 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 09:56:25.294808    1927 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 09:56:25.294818    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.327614    1927 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 09:56:25.327623    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 09:56:25.407608    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 09:56:25.428727    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 09:56:25.438259    1927 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 09:56:25.442146    1927 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 09:56:25.442156    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 09:56:25.442167    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.451656    1927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 09:56:25.451665    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 09:56:25.492059    1927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 09:56:25.496115    1927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 09:56:25.500129    1927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 09:56:25.504207    1927 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 09:56:25.504219    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 09:56:25.504230    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.522074    1927 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 09:56:25.526109    1927 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 09:56:25.530104    1927 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 09:56:25.534503    1927 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 09:56:25.534512    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 09:56:25.534523    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.538005    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 09:56:25.542084    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 09:56:25.546168    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 09:56:25.550118    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 09:56:25.557563    1927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 09:56:25.557574    1927 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 09:56:25.557914    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 09:56:25.565102    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 09:56:25.569168    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 09:56:25.573098    1927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 09:56:25.577065    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 09:56:25.577075    1927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 09:56:25.577086    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.627467    1927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 09:56:25.627476    1927 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 09:56:25.659560    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 09:56:25.704877    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 09:56:25.704928    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 09:56:25.709454    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
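These ssh_runner entries show the addon install path: each manifest is copied into the guest under /etc/kubernetes/addons, then the bundled kubectl is invoked against the in-VM kubeconfig. A minimal sketch of re-running one of these applies by hand, assuming the addons-439000 profile is still up and the manifest is already in place:

		# Re-run a single addon apply inside the guest (sketch):
		minikube -p addons-439000 ssh -- sudo \
		  KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.31.1/kubectl apply \
		  -f /etc/kubernetes/addons/ingress-deploy.yaml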
	I0917 09:56:25.721163    1927 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 09:56:25.725140    1927 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 09:56:25.725150    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 09:56:25.725161    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:25.736807    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 09:56:25.736820    1927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 09:56:25.773360    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 09:56:25.773374    1927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 09:56:25.797678    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 09:56:25.797696    1927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 09:56:25.888634    1927 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-439000 service yakd-dashboard -n yakd-dashboard
	
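The hint above relies on minikube's service tunnel; adding the --url flag makes the same command print the reachable address instead of opening a browser. A sketch:

		minikube -p addons-439000 service yakd-dashboard -n yakd-dashboard --url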
	I0917 09:56:25.888950    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 09:56:25.888962    1927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 09:56:25.970512    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 09:56:25.973222    1927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 09:56:25.973233    1927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 09:56:26.009887    1927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 09:56:26.009896    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 09:56:26.074812    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:26.169935    1927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 09:56:26.169950    1927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 09:56:26.253881    1927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 09:56:26.253891    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 09:56:26.365632    1927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 09:56:26.365643    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 09:56:26.440935    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.231919833s)
	I0917 09:56:26.530393    1927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 09:56:26.530407    1927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 09:56:26.639780    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 09:56:26.680932    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:26.877301    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.469696959s)
	I0917 09:56:26.877318    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.448603709s)
	I0917 09:56:26.877334    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.217787291s)
	I0917 09:56:26.877339    1927 addons.go:475] Verifying addon registry=true in "addons-439000"
	I0917 09:56:26.877452    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.659352417s)
	W0917 09:56:26.877476    1927 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 09:56:26.877485    1927 retry.go:31] will retry after 178.376747ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
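This failure is the usual single-pass apply ordering problem: csi-hostpath-snapshotclass.yaml instantiates the VolumeSnapshotClass kind whose CRD is created in the same kubectl apply, and API discovery has not refreshed yet, hence "ensure CRDs are installed first". The retry 178ms later (reissued with --force, see the apply at 09:56:27.058 below) succeeds. When applying such a bundle by hand, one workaround is to wait for the CRD to be established before creating instances of it; a sketch using the same manifests:

		# Apply the CRD first, wait until established, then the custom resource (sketch):
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml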
	I0917 09:56:26.883654    1927 out.go:177] * Verifying registry addon...
	I0917 09:56:26.891054    1927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 09:56:26.901536    1927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 09:56:26.901548    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
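kapi.go polls the pods matching a minikube-specific label roughly twice a second until each reports Running; the repeated "Pending: [<nil>]" lines that follow are that poll ticking. Roughly the same wait, expressed as a standalone command (a sketch, not what the harness itself runs):

		kubectl -n kube-system wait --for=condition=Ready pod \
		  -l kubernetes.io/minikube-addons=registry --timeout=6m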
	I0917 09:56:27.058013    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 09:56:27.396377    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:27.899657    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:28.395008    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:28.687218    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:28.931542    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:29.436384    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:29.474660    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.769782458s)
	I0917 09:56:29.474795    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.769968125s)
	I0917 09:56:29.474804    1927 addons.go:475] Verifying addon metrics-server=true in "addons-439000"
	I0917 09:56:29.474840    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.765441416s)
	I0917 09:56:29.474849    1927 addons.go:475] Verifying addon ingress=true in "addons-439000"
	I0917 09:56:29.474928    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.504398875s)
	I0917 09:56:29.475113    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.835359s)
	I0917 09:56:29.475120    1927 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-439000"
	I0917 09:56:29.475143    1927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.417147958s)
	I0917 09:56:29.487944    1927 out.go:177] * Verifying ingress addon...
	I0917 09:56:29.492433    1927 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 09:56:29.503006    1927 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 09:56:29.509896    1927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 09:56:29.529533    1927 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 09:56:29.529544    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:29.529633    1927 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 09:56:29.529640    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:29.895272    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:30.007345    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:30.014204    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:30.395141    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:30.507441    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:30.512985    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:30.894095    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:31.007759    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:31.013336    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:31.179879    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:31.397068    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:31.508355    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:31.513302    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:31.895275    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:32.006953    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:32.013374    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:32.395877    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:32.507501    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:32.512685    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:32.894691    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:33.007300    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:33.012918    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:33.181047    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:33.394968    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:33.506920    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:33.512646    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:33.894491    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:34.072398    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:34.072468    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:34.434350    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:34.483235    1927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 09:56:34.483259    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:34.513201    1927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 09:56:34.519415    1927 addons.go:234] Setting addon gcp-auth=true in "addons-439000"
	I0917 09:56:34.519435    1927 host.go:66] Checking if "addons-439000" exists ...
	I0917 09:56:34.520217    1927 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 09:56:34.520224    1927 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0917 09:56:34.537248    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:34.537295    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:34.568142    1927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 09:56:34.572806    1927 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 09:56:34.578820    1927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 09:56:34.578827    1927 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 09:56:34.587150    1927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 09:56:34.587160    1927 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 09:56:34.592853    1927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 09:56:34.592859    1927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 09:56:34.598744    1927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 09:56:34.782189    1927 addons.go:475] Verifying addon gcp-auth=true in "addons-439000"
	I0917 09:56:34.788568    1927 out.go:177] * Verifying gcp-auth addon...
	I0917 09:56:34.795994    1927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 09:56:34.797018    1927 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
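The gcp-auth addon copies the host's application-default credentials into the guest (the google_application_credentials.json transfer at 09:56:34 above) and installs a mutating webhook that injects them into workloads. Enabling it by hand looks roughly like this (a sketch; assumes gcloud is installed and authenticated):

		gcloud auth application-default login
		minikube -p addons-439000 addons enable gcp-auth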
	I0917 09:56:34.898888    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:35.007106    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:35.012503    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:35.401908    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:35.507561    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:35.512345    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:35.679945    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:35.894628    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:36.006972    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:36.012303    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:36.401041    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:36.507064    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:36.512698    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:36.894395    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:37.006965    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:37.012709    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:37.394568    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:37.507161    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:37.512514    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:37.680033    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:37.901825    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:38.006610    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:38.012630    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:38.394833    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:38.507100    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:38.512146    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:38.892607    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:39.174846    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:39.174908    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:39.399397    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:39.507073    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:39.513857    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:39.680798    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:39.894549    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:40.008589    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:40.013153    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:40.401321    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:40.506842    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:40.512618    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:40.894578    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:41.006998    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:41.012489    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:41.394719    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:41.506855    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:41.512319    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:41.894750    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:42.006965    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:42.012351    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:42.180020    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:42.397158    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:42.509096    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:42.514280    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:42.895652    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:43.007590    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:43.013063    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:43.394597    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:43.506918    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:43.513245    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:43.893704    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:44.108423    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:44.108509    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:44.400293    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:44.506825    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:44.512186    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:44.683286    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:44.894059    1927 kapi.go:107] duration metric: took 18.003314s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 09:56:45.007305    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:45.012036    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:45.505608    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:45.513307    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:46.008472    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:46.012329    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:46.505184    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:46.512632    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:47.007084    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:47.012294    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:47.180060    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:47.506469    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:47.512343    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:48.006764    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:48.012250    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:48.506846    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:48.512217    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:49.006925    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:49.012196    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:49.189041    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:49.506845    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:49.512067    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:50.008481    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:50.013024    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:50.506873    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:50.512004    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:51.006659    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:51.012004    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:51.509127    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:51.513039    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:51.683079    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:52.006855    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:52.012021    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:52.506926    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:52.512001    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:53.006592    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:53.012205    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:53.506713    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:53.512174    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:54.007222    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:54.012147    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:54.179406    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:54.506871    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:54.512152    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:55.006907    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:55.011927    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:55.507232    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:55.512256    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:56.004749    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:56.012704    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:56.506970    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:56.512137    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:56.679767    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:57.006738    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:57.012152    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:57.506744    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:57.512086    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:58.006759    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:58.012163    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:58.506853    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:58.511780    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:58.680128    1927 pod_ready.go:103] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:59.006870    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:59.012595    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:59.506686    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:59.511876    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:00.022694    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:00.022994    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:00.506710    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:00.511776    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:00.679540    1927 pod_ready.go:93] pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace has status "Ready":"True"
	I0917 09:57:00.679548    1927 pod_ready.go:82] duration metric: took 36.004682583s for pod "coredns-7c65d6cfc9-qwcbq" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.679553    1927 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x5lc2" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.680302    1927 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-x5lc2" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x5lc2" not found
	I0917 09:57:00.680308    1927 pod_ready.go:82] duration metric: took 751.875µs for pod "coredns-7c65d6cfc9-x5lc2" in "kube-system" namespace to be "Ready" ...
	E0917 09:57:00.680312    1927 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-x5lc2" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x5lc2" not found
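The "not found" here is benign: kubeadm initially creates two CoreDNS replicas and minikube scales the Deployment down to one, so the second pod vanishes while it is still on the wait list; pod_ready logs the error and skips it. A quick check of the surviving replica count (sketch):

		kubectl -n kube-system get deployment coredns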
	I0917 09:57:00.680315    1927 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.682146    1927 pod_ready.go:93] pod "etcd-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:57:00.682150    1927 pod_ready.go:82] duration metric: took 1.832292ms for pod "etcd-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.682154    1927 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.684078    1927 pod_ready.go:93] pod "kube-apiserver-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:57:00.684084    1927 pod_ready.go:82] duration metric: took 1.927583ms for pod "kube-apiserver-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.684088    1927 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.685795    1927 pod_ready.go:93] pod "kube-controller-manager-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:57:00.685799    1927 pod_ready.go:82] duration metric: took 1.706917ms for pod "kube-controller-manager-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.685803    1927 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ps5pn" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.880192    1927 pod_ready.go:93] pod "kube-proxy-ps5pn" in "kube-system" namespace has status "Ready":"True"
	I0917 09:57:00.880202    1927 pod_ready.go:82] duration metric: took 194.399875ms for pod "kube-proxy-ps5pn" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:00.880206    1927 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:01.018179    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:01.018240    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:01.280172    1927 pod_ready.go:93] pod "kube-scheduler-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:57:01.280182    1927 pod_ready.go:82] duration metric: took 399.978667ms for pod "kube-scheduler-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0917 09:57:01.280185    1927 pod_ready.go:39] duration metric: took 36.6099725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 09:57:01.280194    1927 api_server.go:52] waiting for apiserver process to appear ...
	I0917 09:57:01.280266    1927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 09:57:01.287025    1927 api_server.go:72] duration metric: took 36.943048416s to wait for apiserver process to appear ...
	I0917 09:57:01.287036    1927 api_server.go:88] waiting for apiserver healthz status ...
	I0917 09:57:01.287047    1927 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0917 09:57:01.290751    1927 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
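The healthz probe is a plain GET against the apiserver. The same check can be run through kubectl's raw API access, or directly with curl, since the health endpoints are readable by unauthenticated clients under the default RBAC bindings (a sketch):

		kubectl get --raw=/healthz
		curl -k https://192.168.105.2:8443/healthz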
	I0917 09:57:01.291316    1927 api_server.go:141] control plane version: v1.31.1
	I0917 09:57:01.291324    1927 api_server.go:131] duration metric: took 4.285167ms to wait for apiserver health ...
	I0917 09:57:01.291328    1927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 09:57:01.483986    1927 system_pods.go:59] 17 kube-system pods found
	I0917 09:57:01.484000    1927 system_pods.go:61] "coredns-7c65d6cfc9-qwcbq" [098051f2-e649-47b5-bfe4-b69482dc796c] Running
	I0917 09:57:01.484005    1927 system_pods.go:61] "csi-hostpath-attacher-0" [b188d268-8b3b-4328-bab1-caa6a20014b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 09:57:01.484008    1927 system_pods.go:61] "csi-hostpath-resizer-0" [bb989f60-e72d-4094-89df-5ae06cae376b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 09:57:01.484011    1927 system_pods.go:61] "csi-hostpathplugin-m7bc6" [fd1f2357-a01a-4824-a243-4a0b949c1776] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 09:57:01.484014    1927 system_pods.go:61] "etcd-addons-439000" [a14109bd-8c65-44eb-9abe-5a171b4f94b6] Running
	I0917 09:57:01.484017    1927 system_pods.go:61] "kube-apiserver-addons-439000" [6d4777fb-08b4-4ab1-92e8-148ec27e2df2] Running
	I0917 09:57:01.484019    1927 system_pods.go:61] "kube-controller-manager-addons-439000" [69a9c006-d373-426b-97a7-bc60d5880848] Running
	I0917 09:57:01.484021    1927 system_pods.go:61] "kube-ingress-dns-minikube" [320051f8-98b7-4a0a-8e53-aedcbb52e0fe] Running
	I0917 09:57:01.484023    1927 system_pods.go:61] "kube-proxy-ps5pn" [05ba4a7c-63d2-43ab-8a95-76d4774b3847] Running
	I0917 09:57:01.484025    1927 system_pods.go:61] "kube-scheduler-addons-439000" [c2f84a53-def0-4e79-8ade-8d4cb20db35c] Running
	I0917 09:57:01.484027    1927 system_pods.go:61] "metrics-server-84c5f94fbc-4dqp2" [bbed7153-e5e6-4bce-9f35-b76c294d2683] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 09:57:01.484030    1927 system_pods.go:61] "nvidia-device-plugin-daemonset-nms9k" [632af194-297b-4de8-a9f0-5ef6eb83279f] Running
	I0917 09:57:01.484032    1927 system_pods.go:61] "registry-66c9cd494c-zhs2b" [d93d54e8-7ff9-4034-a317-f6c97924ce18] Running
	I0917 09:57:01.484034    1927 system_pods.go:61] "registry-proxy-5fb54" [f61a3ff0-e6a6-463d-8803-ff49ba95d4f4] Running
	I0917 09:57:01.484036    1927 system_pods.go:61] "snapshot-controller-56fcc65765-7bsxr" [89a15129-6189-4899-91e8-e73c15034096] Running
	I0917 09:57:01.484038    1927 system_pods.go:61] "snapshot-controller-56fcc65765-9vm2c" [f7b1b64d-6342-4601-87ba-d4f2125502e8] Running
	I0917 09:57:01.484040    1927 system_pods.go:61] "storage-provisioner" [7116f7d2-87c9-465f-952a-7baa6ccf0c27] Running
	I0917 09:57:01.484043    1927 system_pods.go:74] duration metric: took 192.7155ms to wait for pod list to return data ...
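The csi-hostpath pods report Pending / ContainersNotReady here, most likely because their images are still being pulled; this stage only requires the kube-system pods to exist, while the kapi waits above keep polling for per-container readiness. Inspecting one of them would look like (sketch):

		kubectl -n kube-system describe pod csi-hostpath-attacher-0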
	I0917 09:57:01.484049    1927 default_sa.go:34] waiting for default service account to be created ...
	I0917 09:57:01.506376    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:01.511798    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:01.680441    1927 default_sa.go:45] found service account: "default"
	I0917 09:57:01.680452    1927 default_sa.go:55] duration metric: took 196.403166ms for default service account to be created ...
	I0917 09:57:01.680456    1927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 09:57:01.882735    1927 system_pods.go:86] 17 kube-system pods found
	I0917 09:57:01.882746    1927 system_pods.go:89] "coredns-7c65d6cfc9-qwcbq" [098051f2-e649-47b5-bfe4-b69482dc796c] Running
	I0917 09:57:01.882751    1927 system_pods.go:89] "csi-hostpath-attacher-0" [b188d268-8b3b-4328-bab1-caa6a20014b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 09:57:01.882755    1927 system_pods.go:89] "csi-hostpath-resizer-0" [bb989f60-e72d-4094-89df-5ae06cae376b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 09:57:01.882758    1927 system_pods.go:89] "csi-hostpathplugin-m7bc6" [fd1f2357-a01a-4824-a243-4a0b949c1776] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 09:57:01.882760    1927 system_pods.go:89] "etcd-addons-439000" [a14109bd-8c65-44eb-9abe-5a171b4f94b6] Running
	I0917 09:57:01.882762    1927 system_pods.go:89] "kube-apiserver-addons-439000" [6d4777fb-08b4-4ab1-92e8-148ec27e2df2] Running
	I0917 09:57:01.882764    1927 system_pods.go:89] "kube-controller-manager-addons-439000" [69a9c006-d373-426b-97a7-bc60d5880848] Running
	I0917 09:57:01.882767    1927 system_pods.go:89] "kube-ingress-dns-minikube" [320051f8-98b7-4a0a-8e53-aedcbb52e0fe] Running
	I0917 09:57:01.882769    1927 system_pods.go:89] "kube-proxy-ps5pn" [05ba4a7c-63d2-43ab-8a95-76d4774b3847] Running
	I0917 09:57:01.882771    1927 system_pods.go:89] "kube-scheduler-addons-439000" [c2f84a53-def0-4e79-8ade-8d4cb20db35c] Running
	I0917 09:57:01.882773    1927 system_pods.go:89] "metrics-server-84c5f94fbc-4dqp2" [bbed7153-e5e6-4bce-9f35-b76c294d2683] Running
	I0917 09:57:01.882775    1927 system_pods.go:89] "nvidia-device-plugin-daemonset-nms9k" [632af194-297b-4de8-a9f0-5ef6eb83279f] Running
	I0917 09:57:01.882777    1927 system_pods.go:89] "registry-66c9cd494c-zhs2b" [d93d54e8-7ff9-4034-a317-f6c97924ce18] Running
	I0917 09:57:01.882779    1927 system_pods.go:89] "registry-proxy-5fb54" [f61a3ff0-e6a6-463d-8803-ff49ba95d4f4] Running
	I0917 09:57:01.882781    1927 system_pods.go:89] "snapshot-controller-56fcc65765-7bsxr" [89a15129-6189-4899-91e8-e73c15034096] Running
	I0917 09:57:01.882784    1927 system_pods.go:89] "snapshot-controller-56fcc65765-9vm2c" [f7b1b64d-6342-4601-87ba-d4f2125502e8] Running
	I0917 09:57:01.882786    1927 system_pods.go:89] "storage-provisioner" [7116f7d2-87c9-465f-952a-7baa6ccf0c27] Running
	I0917 09:57:01.882789    1927 system_pods.go:126] duration metric: took 202.334291ms to wait for k8s-apps to be running ...
	I0917 09:57:01.882794    1927 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 09:57:01.882857    1927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:57:01.889161    1927 system_svc.go:56] duration metric: took 6.361959ms WaitForService to wait for kubelet
	I0917 09:57:01.889170    1927 kubeadm.go:582] duration metric: took 37.545206166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
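The kubelet check is a one-shot systemctl query run over ssh. Interactively, the same probe through the profile's ssh session would be (sketch):

		minikube -p addons-439000 ssh -- sudo systemctl is-active kubelet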
	I0917 09:57:01.889178    1927 node_conditions.go:102] verifying NodePressure condition ...
	I0917 09:57:02.005195    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:02.012247    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:02.080401    1927 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 09:57:02.080412    1927 node_conditions.go:123] node cpu capacity is 2
	I0917 09:57:02.080417    1927 node_conditions.go:105] duration metric: took 191.239458ms to run NodePressure ...
	I0917 09:57:02.080423    1927 start.go:241] waiting for startup goroutines ...
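The NodePressure step reads the node's conditions plus its capacity (17734596Ki of ephemeral storage and 2 CPUs on this VM). The same data is visible directly (sketch):

		kubectl get node addons-439000 \
		  -o jsonpath='{.status.capacity}{"\n"}{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'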
	I0917 09:57:02.506853    1927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:02.511985    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 105 near-identical kapi.go:96 poll lines omitted: both the ingress-nginx and csi-hostpath-driver pods stayed Pending, rechecked every ~500ms from 09:57:03 to 09:57:29 ...]
	I0917 09:57:29.011563    1927 kapi.go:107] duration metric: took 59.502682708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... 22 near-identical kapi.go:96 poll lines omitted: the ingress-nginx pod stayed Pending, rechecked every ~500ms until 09:57:40 ...]
	I0917 09:57:40.505774    1927 kapi.go:107] duration metric: took 1m11.003982708s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 09:57:57.300582    1927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 09:57:57.300603    1927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 137 near-identical kapi.go:96 poll lines omitted: the gcp-auth pod stayed Pending, rechecked every ~500ms from 09:57:57 to 09:59:05 ...]
	I0917 09:59:06.302531    1927 kapi.go:107] duration metric: took 2m31.509115542s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 09:59:06.306006    1927 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-439000 cluster.
	I0917 09:59:06.310717    1927 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 09:59:06.315727    1927 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 09:59:06.318783    1927 out.go:177] * Enabled addons: storage-provisioner, inspektor-gadget, default-storageclass, yakd, storage-provisioner-rancher, cloud-spanner, nvidia-device-plugin, volcano, metrics-server, ingress-dns, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 09:59:06.322698    1927 addons.go:510] duration metric: took 2m41.980836041s for enable addons: enabled=[storage-provisioner inspektor-gadget default-storageclass yakd storage-provisioner-rancher cloud-spanner nvidia-device-plugin volcano metrics-server ingress-dns volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 09:59:06.322726    1927 start.go:246] waiting for cluster config update ...
	I0917 09:59:06.322746    1927 start.go:255] writing updated cluster config ...
	I0917 09:59:06.323433    1927 ssh_runner.go:195] Run: rm -f paused
	I0917 09:59:06.484212    1927 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 09:59:06.488659    1927 out.go:201] 
	W0917 09:59:06.491807    1927 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 09:59:06.495656    1927 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 09:59:06.503704    1927 out.go:177] * Done! kubectl is now configured to use "addons-439000" cluster and "default" namespace by default
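
The gcp-auth notes in the log above are directly actionable. A minimal sketch of opting a single pod out of credential mounting, assuming a hypothetical pod named no-creds (the gcp-auth-skip-secret label key and the --refresh hint come from the output above; the pod name and image are illustrative placeholders):

# Hypothetical opt-out example. Only the label key is taken from the
# minikube output above; the pod name and image are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-creds
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx
EOF
# Pods created before the addon was enabled are not retrofitted; per the
# log, recreate them or rerun: minikube addons enable gcp-auth --refresh
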
	
	
	==> Docker <==
	Sep 17 17:08:55 addons-439000 dockerd[1271]: time="2024-09-17T17:08:55.217039531Z" level=info msg="ignoring event" container=eb7d4463b8b35682172fc29c0ba80f579931d03ea0cc53e028b0e601321430d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.217046284Z" level=info msg="shim disconnected" id=eb7d4463b8b35682172fc29c0ba80f579931d03ea0cc53e028b0e601321430d7 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.217232197Z" level=warning msg="cleaning up after shim disconnected" id=eb7d4463b8b35682172fc29c0ba80f579931d03ea0cc53e028b0e601321430d7 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.217236908Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1271]: time="2024-09-17T17:08:55.458381128Z" level=info msg="ignoring event" container=2963f73249f916b712d882879bb888a81268a285cdec94b3d123cebc5e2c8678 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.458694222Z" level=info msg="shim disconnected" id=2963f73249f916b712d882879bb888a81268a285cdec94b3d123cebc5e2c8678 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.459382603Z" level=warning msg="cleaning up after shim disconnected" id=2963f73249f916b712d882879bb888a81268a285cdec94b3d123cebc5e2c8678 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.459404946Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.623741695Z" level=info msg="shim disconnected" id=31f1e1c00a4ec9a26c1052cbc332e35c866839de072fa4ebebdcf484eb6de3a6 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.623777085Z" level=warning msg="cleaning up after shim disconnected" id=31f1e1c00a4ec9a26c1052cbc332e35c866839de072fa4ebebdcf484eb6de3a6 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.623781629Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1271]: time="2024-09-17T17:08:55.624744212Z" level=info msg="ignoring event" container=31f1e1c00a4ec9a26c1052cbc332e35c866839de072fa4ebebdcf484eb6de3a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.639260574Z" level=warning msg="cleanup warnings time=\"2024-09-17T17:08:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1271]: time="2024-09-17T17:08:55.670864994Z" level=info msg="ignoring event" container=5679a53c99ab70ff1c5a77f0afe784b2482024c40a90d3436a3accc5fe925024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.670938026Z" level=info msg="shim disconnected" id=5679a53c99ab70ff1c5a77f0afe784b2482024c40a90d3436a3accc5fe925024 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.670987339Z" level=warning msg="cleaning up after shim disconnected" id=5679a53c99ab70ff1c5a77f0afe784b2482024c40a90d3436a3accc5fe925024 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.670994550Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1271]: time="2024-09-17T17:08:55.727366475Z" level=info msg="ignoring event" container=3dd0a7db53c2570e50dbaa010630626adac707cdfd1a8901b5ff15f6893c5a5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.727803372Z" level=info msg="shim disconnected" id=3dd0a7db53c2570e50dbaa010630626adac707cdfd1a8901b5ff15f6893c5a5a namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.727836886Z" level=warning msg="cleaning up after shim disconnected" id=3dd0a7db53c2570e50dbaa010630626adac707cdfd1a8901b5ff15f6893c5a5a namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.727841305Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1271]: time="2024-09-17T17:08:55.783262735Z" level=info msg="ignoring event" container=2afb0a1fea3b6ddc4b8cb6a6506d54dc177d96f248fec93d3881dbe07bbb6418 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.784943254Z" level=info msg="shim disconnected" id=2afb0a1fea3b6ddc4b8cb6a6506d54dc177d96f248fec93d3881dbe07bbb6418 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.785082689Z" level=warning msg="cleaning up after shim disconnected" id=2afb0a1fea3b6ddc4b8cb6a6506d54dc177d96f248fec93d3881dbe07bbb6418 namespace=moby
	Sep 17 17:08:55 addons-439000 dockerd[1278]: time="2024-09-17T17:08:55.785087066Z" level=info msg="cleaning up dead shim" namespace=moby
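
A note on the dockerd lines above: the "shim disconnected" / "cleaning up after shim disconnected" / "ignoring event ... TaskDelete" triples are routine containerd-shim teardown emitted once per exiting container, not failures; the container IDs here match the registry and registry-proxy containers (and their pod sandboxes) listed in the container status table below as Exited. The section is journald-formatted, so the same stream can be tailed on the node directly (a sketch; the flags are standard journalctl options and the docker unit name matches the log source above):

# Tail the Docker engine journal inside the minikube guest.
minikube ssh -- sudo journalctl -u docker --no-pager -n 25
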
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	0ef02eb90deb1       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                8 seconds ago       Running             nginx                      0                   214e5db817a36       nginx
	d14014599d644       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   4bf51e9f60ee8       gcp-auth-89d5ffd79-9ww6x
	da0b117a371bc       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   4b0597f4ab28e       ingress-nginx-controller-bc57996ff-2qqz9
	368734be2edef       420193b27261a                                                                                                                11 minutes ago      Exited              patch                      1                   d37d75f364e15       ingress-nginx-admission-patch-jmdkc
	dfd9b22614f63       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   119bd84cb6304       ingress-nginx-admission-create-v4t4r
	5792f7468a96a       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner     0                   ad040372cfa11       local-path-provisioner-86d989889c-6v4tt
	ad40599477606       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   ebb266f19726a       nvidia-device-plugin-daemonset-nms9k
	5679a53c99ab7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   2afb0a1fea3b6       registry-proxy-5fb54
	da088c070c197       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   15a3abb622bf8       cloud-spanner-emulator-769b77f747-zmtnt
	31f1e1c00a4ec       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   3dd0a7db53c25       registry-66c9cd494c-zhs2b
	540deb175d9f7       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   cdfedd7480f8e       yakd-dashboard-67d98fc6b-fgfrb
	255c6730c9436       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   f6e2c095a6858       storage-provisioner
	8c0941a912f74       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   31b4c90ee246a       coredns-7c65d6cfc9-qwcbq
	8c5599546105a       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   c2e51c1577d92       kube-proxy-ps5pn
	dca44dc00dc70       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   9df22eadbfa3f       kube-scheduler-addons-439000
	78259aa3e6238       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   a4eeecfd64c91       etcd-addons-439000
	b3996f04027f7       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   5bae26d236bb2       kube-controller-manager-addons-439000
	655a8eb962ee9       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   47a24919fade6       kube-apiserver-addons-439000
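
The table above is the node's container-runtime view, covering both the long-running addon containers and the Exited one-shot admission jobs. A similar listing can be pulled from the guest with crictl, which this cluster's Docker runtime exposes through cri-dockerd (a sketch; it assumes the default minikube guest, where the CRI socket is unix:///var/run/cri-dockerd.sock per the node annotations below):

# List all containers, including exited ones, as the report does.
minikube ssh -- sudo crictl ps -a
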
	
	
	==> controller_ingress [da0b117a371b] <==
	W0917 17:08:45.502719       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0917 17:08:45.502782       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"855a1571-7658-4210-96c8-d80e6050dfcd", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2704", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0917 17:08:45.502786       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 17:08:45.518124       7 controller.go:213] "Backend successfully reloaded"
	I0917 17:08:45.518439       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-2qqz9", UID:"0c798cfe-f946-43d7-966f-58277a057dff", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0917 17:08:48.837473       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0917 17:08:48.837534       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 17:08:48.872560       7 controller.go:213] "Backend successfully reloaded"
	I0917 17:08:48.872915       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-2qqz9", UID:"0c798cfe-f946-43d7-966f-58277a057dff", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0917 17:08:54.772888       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0917 17:08:54.784521       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.012s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.012s testedConfigurationSize:26.2kB}
	I0917 17:08:54.784567       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0917 17:08:54.867131       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0917 17:08:54.867632       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"eb14988f-319d-48bb-9ba8-1b09e535fd55", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2749", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0917 17:08:55.503855       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 17:08:55.522136       7 controller.go:213] "Backend successfully reloaded"
	I0917 17:08:55.522404       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-2qqz9", UID:"0c798cfe-f946-43d7-966f-58277a057dff", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0917 17:08:55.880839       7 sigterm.go:36] "Received SIGTERM, shutting down"
	I0917 17:08:55.880861       7 nginx.go:393] "Shutting down controller queues"
	E0917 17:08:55.881749       7 status.go:120] "error obtaining running IP address" err="pods is forbidden: User \"system:serviceaccount:ingress-nginx:ingress-nginx\" cannot list resource \"pods\" in API group \"\" in the namespace \"ingress-nginx\""
	I0917 17:08:55.881758       7 nginx.go:401] "Stopping admission controller"
	E0917 17:08:55.881779       7 nginx.go:340] "Error listening for TLS connections" err="http: Server closed"
	I0917 17:08:55.881825       7 nginx.go:409] "Stopping NGINX process"
	2024/09/17 17:08:55 [notice] 315#315: signal process started
	10.244.0.1 - - [17/Sep/2024:17:08:54 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.001 [default-nginx-80] [] 10.244.0.30:80 615 0.000 200 ce81cc2fafd72f9ba4877c24c060ca8f
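
The reload cycle above ("Configuration changes detected, backend reload required" followed by a RELOAD event) fires each time a watched Ingress or its backends change; the final lines show a clean SIGTERM shutdown and one successful proxied request to the default/nginx backend. For reference, a minimal sketch of the kind of object being synced, assuming a hypothetical manifest (the nginx-ingress name, the default/nginx Service on port 80, and the nginx ingress class all appear in the log; the hostname is illustrative):

# Hypothetical Ingress comparable to default/nginx-ingress from the log.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.local   # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx   # Service seen in the log's upstream [default-nginx-80]
            port:
              number: 80
EOF
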
	
	
	==> coredns [8c0941a912f7] <==
	[INFO] 10.244.0.20:32853 - 25893 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014131s
	[INFO] 10.244.0.20:55061 - 12665 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054732s
	[INFO] 10.244.0.20:55061 - 4229 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038225s
	[INFO] 10.244.0.20:55061 - 42661 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038058s
	[INFO] 10.244.0.20:55061 - 16560 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024428s
	[INFO] 10.244.0.20:55061 - 6308 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033931s
	[INFO] 10.244.0.20:32853 - 57209 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002251s
	[INFO] 10.244.0.20:32853 - 45412 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024885s
	[INFO] 10.244.0.20:32853 - 29814 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011047s
	[INFO] 10.244.0.20:32853 - 46048 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010712s
	[INFO] 10.244.0.20:32853 - 13155 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030679s
	[INFO] 10.244.0.20:38829 - 51843 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048271s
	[INFO] 10.244.0.20:49869 - 29248 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078784s
	[INFO] 10.244.0.20:38829 - 23674 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015881s
	[INFO] 10.244.0.20:49869 - 38422 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015465s
	[INFO] 10.244.0.20:38829 - 33299 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014756s
	[INFO] 10.244.0.20:49869 - 52605 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015006s
	[INFO] 10.244.0.20:38829 - 62497 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000138477s
	[INFO] 10.244.0.20:38829 - 64957 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040559s
	[INFO] 10.244.0.20:49869 - 32697 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037683s
	[INFO] 10.244.0.20:49869 - 43189 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011338s
	[INFO] 10.244.0.20:38829 - 40194 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000018591s
	[INFO] 10.244.0.20:49869 - 2758 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000017757s
	[INFO] 10.244.0.20:38829 - 2068 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000021468s
	[INFO] 10.244.0.20:49869 - 48694 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015923s
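
The NXDOMAIN-then-NOERROR cascades above are normal Kubernetes DNS search-path expansion, not lookup failures: with the default ndots:5, a name like hello-world-app.default.svc.cluster.local (four dots) is first tried with each resolver search suffix appended, and only the final absolute form returns NOERROR. The search list in play can be read from any pod's resolver config (a sketch; the busybox pod name comes from the node listing below, and the output shown is the usual Kubernetes default, not captured from this run):

# Show the resolver config driving the suffix cascade in the log.
kubectl exec busybox -- cat /etc/resolv.conf
# Typical default contents (illustrative, not captured from this run):
#   search default.svc.cluster.local svc.cluster.local cluster.local
#   nameserver 10.96.0.10
#   options ndots:5
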
	
	
	==> describe nodes <==
	Name:               addons-439000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-439000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-439000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T09_56_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-439000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-439000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:08:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:08:54 +0000   Tue, 17 Sep 2024 16:56:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:08:54 +0000   Tue, 17 Sep 2024 16:56:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:08:54 +0000   Tue, 17 Sep 2024 16:56:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:08:54 +0000   Tue, 17 Sep 2024 16:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-439000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb345283350442cda1a59b009bfb9825
	  System UUID:                bb345283350442cda1a59b009bfb9825
	  Boot ID:                    9c0fcae0-2082-4b81-a4e3-7b2240bf3dc7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-zmtnt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-kzwdk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  gcp-auth                    gcp-auth-89d5ffd79-9ww6x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-qwcbq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-439000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-439000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-439000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ps5pn                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-439000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-nms9k       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-6v4tt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-fgfrb             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node addons-439000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node addons-439000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node addons-439000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node addons-439000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-439000 event: Registered Node addons-439000 in Controller
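
The percentages in the "Allocated resources" table above are the summed pod requests divided by node allocatable, truncated to whole percent (750m of 2 CPUs is 37%). A minimal sketch of that arithmetic using k8s.io/apimachinery's resource.Quantity, assuming the module is on the path; this is not code from the report's tooling, and the figures come from the table above:

	// request_pct.go - illustrative only; figures taken from the table above.
	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		requested := resource.MustParse("750m") // summed CPU requests
		allocatable := resource.MustParse("2")  // node allocatable CPU
		pct := requested.MilliValue() * 100 / allocatable.MilliValue()
		fmt.Printf("cpu %s (%d%%)\n", requested.String(), pct) // cpu 750m (37%)
	}
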
	
	
	==> dmesg <==
	[Sep17 16:57] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.062922] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.841540] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.022386] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.734240] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.057258] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.635893] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.344948] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 16:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.143807] kauditd_printk_skb: 46 callbacks suppressed
	[Sep17 16:59] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.316827] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.851632] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.400310] kauditd_printk_skb: 20 callbacks suppressed
	[Sep17 17:00] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:02] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:07] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.919202] kauditd_printk_skb: 7 callbacks suppressed
	[Sep17 17:08] kauditd_printk_skb: 10 callbacks suppressed
	[ +11.585253] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.788661] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.625030] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.426184] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.281094] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.674418] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [78259aa3e623] <==
	{"level":"info","ts":"2024-09-17T16:56:15.954688Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-439000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T16:56:15.954776Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:15.954911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:15.954967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:15.954991Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:15.955030Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:15.955050Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:15.955068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:15.955477Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:15.956099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:15.956660Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-17T16:56:15.961678Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-17T16:56:39.223773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.386503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:56:39.223813Z","caller":"traceutil/trace.go:171","msg":"trace[1481171667] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:956; }","duration":"160.438153ms","start":"2024-09-17T16:56:39.063367Z","end":"2024-09-17T16:56:39.223805Z","steps":["trace[1481171667] 'range keys from in-memory index tree'  (duration: 160.35597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:56:39.223920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.333452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:56:39.223927Z","caller":"traceutil/trace.go:171","msg":"trace[168040434] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:956; }","duration":"166.341542ms","start":"2024-09-17T16:56:39.057584Z","end":"2024-09-17T16:56:39.223926Z","steps":["trace[168040434] 'range keys from in-memory index tree'  (duration: 166.310843ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:44.140787Z","caller":"traceutil/trace.go:171","msg":"trace[1395539914] linearizableReadLoop","detail":"{readStateIndex:994; appliedIndex:993; }","duration":"100.792828ms","start":"2024-09-17T16:56:44.039985Z","end":"2024-09-17T16:56:44.140778Z","steps":["trace[1395539914] 'read index received'  (duration: 100.714273ms)","trace[1395539914] 'applied index is now lower than readState.Index'  (duration: 78.389µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:56:44.140837Z","caller":"traceutil/trace.go:171","msg":"trace[622126844] transaction","detail":"{read_only:false; response_revision:973; number_of_response:1; }","duration":"104.627028ms","start":"2024-09-17T16:56:44.036206Z","end":"2024-09-17T16:56:44.140833Z","steps":["trace[622126844] 'process raft request'  (duration: 104.515971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:56:44.140959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.964194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:56:44.140974Z","caller":"traceutil/trace.go:171","msg":"trace[452937718] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:973; }","duration":"100.987179ms","start":"2024-09-17T16:56:44.039984Z","end":"2024-09-17T16:56:44.140971Z","steps":["trace[452937718] 'agreement among raft nodes before linearized reading'  (duration: 100.938301ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:48.283520Z","caller":"traceutil/trace.go:171","msg":"trace[233684249] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"155.318285ms","start":"2024-09-17T16:56:48.128192Z","end":"2024-09-17T16:56:48.283510Z","steps":["trace[233684249] 'process raft request'  (duration: 152.227497ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:59:26.695453Z","caller":"traceutil/trace.go:171","msg":"trace[1931019696] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"255.131324ms","start":"2024-09-17T16:59:26.440309Z","end":"2024-09-17T16:59:26.695440Z","steps":["trace[1931019696] 'process raft request'  (duration: 255.077965ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:16.463309Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1840}
	{"level":"info","ts":"2024-09-17T17:06:16.552182Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1840,"took":"86.206341ms","hash":2355422742,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4788224,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-17T17:06:16.554859Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2355422742,"revision":1840,"compact-revision":-1}
	
	
	==> gcp-auth [d14014599d64] <==
	2024/09/17 16:59:05 GCP Auth Webhook started!
	2024/09/17 16:59:21 Ready to marshal response ...
	2024/09/17 16:59:21 Ready to write response ...
	2024/09/17 16:59:21 Ready to marshal response ...
	2024/09/17 16:59:21 Ready to write response ...
	2024/09/17 16:59:43 Ready to marshal response ...
	2024/09/17 16:59:43 Ready to write response ...
	2024/09/17 16:59:44 Ready to marshal response ...
	2024/09/17 16:59:44 Ready to write response ...
	2024/09/17 16:59:44 Ready to marshal response ...
	2024/09/17 16:59:44 Ready to write response ...
	2024/09/17 17:07:48 Ready to marshal response ...
	2024/09/17 17:07:48 Ready to write response ...
	2024/09/17 17:07:55 Ready to marshal response ...
	2024/09/17 17:07:55 Ready to write response ...
	2024/09/17 17:08:14 Ready to marshal response ...
	2024/09/17 17:08:14 Ready to write response ...
	2024/09/17 17:08:45 Ready to marshal response ...
	2024/09/17 17:08:45 Ready to write response ...
	2024/09/17 17:08:54 Ready to marshal response ...
	2024/09/17 17:08:54 Ready to write response ...
	
	
	==> kernel <==
	 17:08:56 up 12 min,  0 users,  load average: 1.26, 0.77, 0.45
	Linux addons-439000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [655a8eb962ee] <==
	I0917 16:59:34.691456       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0917 16:59:35.159335       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0917 16:59:35.386691       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 16:59:35.571633       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 16:59:35.583501       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0917 16:59:35.583533       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 16:59:35.692256       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 16:59:35.765857       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 17:07:56.398163       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 17:08:29.443978       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:29.444004       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:29.457139       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:29.461056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:29.472848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:29.472869       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:29.488345       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:29.488561       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:08:30.462182       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:08:30.488412       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0917 17:08:30.580729       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0917 17:08:40.188281       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 17:08:41.202073       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0917 17:08:45.500814       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 17:08:45.599702       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.109.63"}
	I0917 17:08:54.918878       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.135.53"}
	
	
	==> kube-controller-manager [b3996f04027f] <==
	E0917 17:08:48.225708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:48.852378       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:48.852400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:49.060019       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:49.060045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:50.261020       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0917 17:08:50.315322       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:50.315404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:52.108224       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:52.108342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:52.634914       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:52.635030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:54.150452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-439000"
	I0917 17:08:54.157902       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 17:08:54.157933       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 17:08:54.559474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 17:08:54.559517       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 17:08:54.779735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.650831ms"
	I0917 17:08:54.783630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.782594ms"
	I0917 17:08:54.783787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="21.551µs"
	I0917 17:08:54.788733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.756µs"
	I0917 17:08:55.598653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.293µs"
	I0917 17:08:55.841041       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0917 17:08:55.843324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="1.876µs"
	I0917 17:08:55.844994       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	
	
	==> kube-proxy [8c5599546105] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 16:56:25.123297       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 16:56:25.149226       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0917 16:56:25.149266       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:25.172197       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 16:56:25.172215       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 16:56:25.172243       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:25.173145       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:25.174734       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:25.174740       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:25.177367       1 config.go:199] "Starting service config controller"
	I0917 16:56:25.177382       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:25.177475       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:25.177477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:25.178780       1 config.go:328] "Starting node config controller"
	I0917 16:56:25.178785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:25.278706       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:25.278735       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:25.283817       1 shared_informer.go:320] Caches are synced for node config
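
After the nftables cleanup fails, the proxier falls back to iptables and sets route_localnet=1 so NodePorts also answer on loopback, as logged above. The effective value can be read back on the node; a minimal Linux-only sketch, not part of the report's tooling:

	// route_localnet.go - illustrative only; run on the minikube node.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(b))) // "1" once kube-proxy has started
	}
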
	
	
	==> kube-scheduler [dca44dc00dc7] <==
	W0917 16:56:16.958475       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 16:56:16.958659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:16.958487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:16.958666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:16.958499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:16.958673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:16.958510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 16:56:16.958693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:16.958514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:16.958701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:16.958532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:16.958708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:17.804303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:17.804357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:17.842865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 16:56:17.842930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:17.854876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:17.854897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:17.864364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:17.864390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:17.883408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 16:56:17.883433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:17.921630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:17.921809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0917 16:56:18.556917       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:08:50 addons-439000 kubelet[2044]: E0917 17:08:50.161149    2044 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4baac25f-e05f-475a-9efc-73eba6effce3"
	Sep 17 17:08:54 addons-439000 kubelet[2044]: I0917 17:08:54.776010    2044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=6.82228989 podStartE2EDuration="9.775997049s" podCreationTimestamp="2024-09-17 17:08:45 +0000 UTC" firstStartedPulling="2024-09-17 17:08:45.990755303 +0000 UTC m=+746.871166254" lastFinishedPulling="2024-09-17 17:08:48.944462504 +0000 UTC m=+749.824873413" observedRunningTime="2024-09-17 17:08:49.47380373 +0000 UTC m=+750.354214681" watchObservedRunningTime="2024-09-17 17:08:54.775997049 +0000 UTC m=+755.656407958"
	Sep 17 17:08:54 addons-439000 kubelet[2044]: E0917 17:08:54.776212    2044 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a22ff28-1b0e-4ccd-822a-f6b707202278" containerName="gadget"
	Sep 17 17:08:54 addons-439000 kubelet[2044]: I0917 17:08:54.776232    2044 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a22ff28-1b0e-4ccd-822a-f6b707202278" containerName="gadget"
	Sep 17 17:08:54 addons-439000 kubelet[2044]: I0917 17:08:54.952584    2044 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td52g\" (UniqueName: \"kubernetes.io/projected/08c259b2-3180-4347-881a-bfed37a1da4c-kube-api-access-td52g\") pod \"hello-world-app-55bf9c44b4-kzwdk\" (UID: \"08c259b2-3180-4347-881a-bfed37a1da4c\") " pod="default/hello-world-app-55bf9c44b4-kzwdk"
	Sep 17 17:08:54 addons-439000 kubelet[2044]: I0917 17:08:54.952653    2044 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/08c259b2-3180-4347-881a-bfed37a1da4c-gcp-creds\") pod \"hello-world-app-55bf9c44b4-kzwdk\" (UID: \"08c259b2-3180-4347-881a-bfed37a1da4c\") " pod="default/hello-world-app-55bf9c44b4-kzwdk"
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.355006    2044 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ncc8\" (UniqueName: \"kubernetes.io/projected/320051f8-98b7-4a0a-8e53-aedcbb52e0fe-kube-api-access-6ncc8\") pod \"320051f8-98b7-4a0a-8e53-aedcbb52e0fe\" (UID: \"320051f8-98b7-4a0a-8e53-aedcbb52e0fe\") "
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.355943    2044 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320051f8-98b7-4a0a-8e53-aedcbb52e0fe-kube-api-access-6ncc8" (OuterVolumeSpecName: "kube-api-access-6ncc8") pod "320051f8-98b7-4a0a-8e53-aedcbb52e0fe" (UID: "320051f8-98b7-4a0a-8e53-aedcbb52e0fe"). InnerVolumeSpecName "kube-api-access-6ncc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.456196    2044 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6ncc8\" (UniqueName: \"kubernetes.io/projected/320051f8-98b7-4a0a-8e53-aedcbb52e0fe-kube-api-access-6ncc8\") on node \"addons-439000\" DevicePath \"\""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.556732    2044 scope.go:117] "RemoveContainer" containerID="f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9"
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.557706    2044 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/97170b29-16c9-4962-b9bb-f831a3905112-gcp-creds\") pod \"97170b29-16c9-4962-b9bb-f831a3905112\" (UID: \"97170b29-16c9-4962-b9bb-f831a3905112\") "
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.557719    2044 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbf7b\" (UniqueName: \"kubernetes.io/projected/97170b29-16c9-4962-b9bb-f831a3905112-kube-api-access-pbf7b\") pod \"97170b29-16c9-4962-b9bb-f831a3905112\" (UID: \"97170b29-16c9-4962-b9bb-f831a3905112\") "
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.558120    2044 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97170b29-16c9-4962-b9bb-f831a3905112-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "97170b29-16c9-4962-b9bb-f831a3905112" (UID: "97170b29-16c9-4962-b9bb-f831a3905112"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.559828    2044 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97170b29-16c9-4962-b9bb-f831a3905112-kube-api-access-pbf7b" (OuterVolumeSpecName: "kube-api-access-pbf7b") pod "97170b29-16c9-4962-b9bb-f831a3905112" (UID: "97170b29-16c9-4962-b9bb-f831a3905112"). InnerVolumeSpecName "kube-api-access-pbf7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.582239    2044 scope.go:117] "RemoveContainer" containerID="f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9"
	Sep 17 17:08:55 addons-439000 kubelet[2044]: E0917 17:08:55.582721    2044 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9" containerID="f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9"
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.582737    2044 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9"} err="failed to get container status \"f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9\": rpc error: code = Unknown desc = Error response from daemon: No such container: f89ea10e9c5e86b9d9d043b3c2a718cd3f60d0115056647f7e2562be4a1769a9"
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.659800    2044 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/97170b29-16c9-4962-b9bb-f831a3905112-gcp-creds\") on node \"addons-439000\" DevicePath \"\""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.659815    2044 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pbf7b\" (UniqueName: \"kubernetes.io/projected/97170b29-16c9-4962-b9bb-f831a3905112-kube-api-access-pbf7b\") on node \"addons-439000\" DevicePath \"\""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.861932    2044 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddmsg\" (UniqueName: \"kubernetes.io/projected/d93d54e8-7ff9-4034-a317-f6c97924ce18-kube-api-access-ddmsg\") pod \"d93d54e8-7ff9-4034-a317-f6c97924ce18\" (UID: \"d93d54e8-7ff9-4034-a317-f6c97924ce18\") "
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.861965    2044 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5tss\" (UniqueName: \"kubernetes.io/projected/f61a3ff0-e6a6-463d-8803-ff49ba95d4f4-kube-api-access-k5tss\") pod \"f61a3ff0-e6a6-463d-8803-ff49ba95d4f4\" (UID: \"f61a3ff0-e6a6-463d-8803-ff49ba95d4f4\") "
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.865603    2044 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93d54e8-7ff9-4034-a317-f6c97924ce18-kube-api-access-ddmsg" (OuterVolumeSpecName: "kube-api-access-ddmsg") pod "d93d54e8-7ff9-4034-a317-f6c97924ce18" (UID: "d93d54e8-7ff9-4034-a317-f6c97924ce18"). InnerVolumeSpecName "kube-api-access-ddmsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.865646    2044 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61a3ff0-e6a6-463d-8803-ff49ba95d4f4-kube-api-access-k5tss" (OuterVolumeSpecName: "kube-api-access-k5tss") pod "f61a3ff0-e6a6-463d-8803-ff49ba95d4f4" (UID: "f61a3ff0-e6a6-463d-8803-ff49ba95d4f4"). InnerVolumeSpecName "kube-api-access-k5tss". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.962376    2044 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ddmsg\" (UniqueName: \"kubernetes.io/projected/d93d54e8-7ff9-4034-a317-f6c97924ce18-kube-api-access-ddmsg\") on node \"addons-439000\" DevicePath \"\""
	Sep 17 17:08:55 addons-439000 kubelet[2044]: I0917 17:08:55.962430    2044 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k5tss\" (UniqueName: \"kubernetes.io/projected/f61a3ff0-e6a6-463d-8803-ff49ba95d4f4-kube-api-access-k5tss\") on node \"addons-439000\" DevicePath \"\""
	
	
	==> storage-provisioner [255c6730c943] <==
	I0917 16:56:26.020548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:26.045929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:26.045952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:26.088719       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:26.088789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-439000_f2ff792b-a9ce-48db-b205-ad26f080c412!
	I0917 16:56:26.089330       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9cb7c3d-41ce-4b6b-a35a-5c4cf508cd81", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-439000_f2ff792b-a9ce-48db-b205-ad26f080c412 became leader
	I0917 16:56:26.189033       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-439000_f2ff792b-a9ce-48db-b205-ad26f080c412!
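
The provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath lock, as the log above shows (this build still uses an Endpoints-based lock). A minimal sketch of the same pattern with client-go's current Lease-based lock, assuming an already-constructed clientset; the lock name and namespace are taken from the log, and this is not the provisioner's actual code:

	// leaderelect.go - illustrative sketch of the pattern, not the provisioner's code.
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func runWithLease(ctx context.Context, client kubernetes.Interface, id string) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* lease lost: stop provisioning */ },
			},
		})
	}
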
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-439000 -n addons-439000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-439000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-kzwdk registry-66c9cd494c-zhs2b registry-proxy-5fb54
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-439000 describe pod busybox hello-world-app-55bf9c44b4-kzwdk registry-66c9cd494c-zhs2b registry-proxy-5fb54
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-439000 describe pod busybox hello-world-app-55bf9c44b4-kzwdk registry-66c9cd494c-zhs2b registry-proxy-5fb54: exit status 1 (50.886375ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-439000/192.168.105.2
	Start Time:       Tue, 17 Sep 2024 09:59:44 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhn72 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hhn72:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m12s                   default-scheduler  Successfully assigned default/busybox to addons-439000
	  Warning  Failed     7m58s (x6 over 9m11s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m47s (x4 over 9m12s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m12s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m12s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m11s (x21 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             hello-world-app-55bf9c44b4-kzwdk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-439000/192.168.105.2
	Start Time:       Tue, 17 Sep 2024 10:08:54 -0700
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-td52g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-td52g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-kzwdk to addons-439000
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-66c9cd494c-zhs2b" not found
	Error from server (NotFound): pods "registry-proxy-5fb54" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-439000 describe pod busybox hello-world-app-55bf9c44b4-kzwdk registry-66c9cd494c-zhs2b registry-proxy-5fb54: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.38s)
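
The post-mortem above hinges on the helpers_test.go:261 field-selector query: every pod whose status.phase is not Running gets described, and names already garbage-collected by then (the registry pods) come back NotFound on stderr, producing the exit status 1. The same query in Go with client-go, with clientset construction assumed and elided:

	// nonrunning.go - illustrative equivalent of the kubectl invocation above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func nonRunningPods(ctx context.Context, client kubernetes.Interface) error {
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "status.phase!=Running", // same selector as helpers_test.go:261
		})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
		return nil
	}
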

                                                
                                    
TestCertOptions (10.22s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-437000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-437000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.957245958s)

                                                
                                                
-- stdout --
	* [cert-options-437000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-437000" primary control-plane node in "cert-options-437000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-437000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-437000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-437000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-437000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-437000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.637584ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-437000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-437000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-437000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-437000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-437000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.113917ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-437000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-437000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-437000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-17 10:42:14.602132 -0700 PDT m=+2813.626639959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-437000 -n cert-options-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-437000 -n cert-options-437000: exit status 7 (30.709416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-437000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-437000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-437000
--- FAIL: TestCertOptions (10.22s)

TestCertExpiration (195.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.12837325s)

-- stdout --
	* [cert-expiration-767000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-767000" primary control-plane node in "cert-expiration-767000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-767000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-767000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.223493167s)

-- stdout --
	* [cert-expiration-767000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-767000" primary control-plane node in "cert-expiration-767000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-767000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-767000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-767000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
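
For context, the scenario this test exercises is: create a cluster whose certificates expire after three minutes, wait out the expiry, then restart and expect minikube to warn about the expired certs. Reduced to the commands from this log, the flow is roughly:

	out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # let the three-minute certificates lapse
	out/minikube-darwin-arm64 start -p cert-expiration-767000 --memory=2048 --cert-expiration=8760h --driver=qemu2

Neither start ever reached provisioning here, so the warning the next assertion looks for never had a chance to be emitted; this is the socket_vmnet failure again, not a cert-rotation bug.
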
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-767000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-767000" primary control-plane node in "cert-expiration-767000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-767000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-767000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-767000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-17 10:45:14.607336 -0700 PDT m=+2993.637409959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-767000 -n cert-expiration-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-767000 -n cert-expiration-767000: exit status 7 (64.626333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-767000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-767000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-767000
--- FAIL: TestCertExpiration (195.50s)

TestDockerFlags (10.33s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.097954875s)

-- stdout --
	* [docker-flags-981000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-981000" primary control-plane node in "docker-flags-981000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-981000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:41:54.187393    4654 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:41:54.187517    4654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:54.187521    4654 out.go:358] Setting ErrFile to fd 2...
	I0917 10:41:54.187523    4654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:54.187643    4654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:41:54.188721    4654 out.go:352] Setting JSON to false
	I0917 10:41:54.204680    4654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4277,"bootTime":1726590637,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:41:54.204748    4654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:41:54.210206    4654 out.go:177] * [docker-flags-981000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:41:54.218178    4654 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:41:54.218223    4654 notify.go:220] Checking for updates...
	I0917 10:41:54.225171    4654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:41:54.228221    4654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:41:54.231185    4654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:41:54.234158    4654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:41:54.237213    4654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:41:54.240543    4654 config.go:182] Loaded profile config "force-systemd-flag-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:54.240610    4654 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:54.240658    4654 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:41:54.245197    4654 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:41:54.252173    4654 start.go:297] selected driver: qemu2
	I0917 10:41:54.252180    4654 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:41:54.252187    4654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:41:54.254385    4654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:41:54.257169    4654 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:41:54.260308    4654 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0917 10:41:54.260334    4654 cni.go:84] Creating CNI manager for ""
	I0917 10:41:54.260362    4654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:41:54.260366    4654 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:41:54.260392    4654 start.go:340] cluster config:
	{Name:docker-flags-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:41:54.264048    4654 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:54.271196    4654 out.go:177] * Starting "docker-flags-981000" primary control-plane node in "docker-flags-981000" cluster
	I0917 10:41:54.279189    4654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:41:54.279206    4654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:41:54.279219    4654 cache.go:56] Caching tarball of preloaded images
	I0917 10:41:54.279287    4654 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:41:54.279300    4654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:41:54.279357    4654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/docker-flags-981000/config.json ...
	I0917 10:41:54.279370    4654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/docker-flags-981000/config.json: {Name:mk88a3694823cb01e4e442deae6fba73d7c7e955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:41:54.279765    4654 start.go:360] acquireMachinesLock for docker-flags-981000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:54.279799    4654 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "docker-flags-981000"
	I0917 10:41:54.279809    4654 start.go:93] Provisioning new machine with config: &{Name:docker-flags-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:54.279838    4654 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:54.283224    4654 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:54.300496    4654 start.go:159] libmachine.API.Create for "docker-flags-981000" (driver="qemu2")
	I0917 10:41:54.300529    4654 client.go:168] LocalClient.Create starting
	I0917 10:41:54.300587    4654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:54.300620    4654 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:54.300628    4654 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:54.300665    4654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:54.300692    4654 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:54.300700    4654 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:54.301166    4654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:54.462466    4654 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:54.504181    4654 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:54.504186    4654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:54.504367    4654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2
	I0917 10:41:54.513686    4654 main.go:141] libmachine: STDOUT: 
	I0917 10:41:54.513701    4654 main.go:141] libmachine: STDERR: 
	I0917 10:41:54.513754    4654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2 +20000M
	I0917 10:41:54.521541    4654 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:54.521565    4654 main.go:141] libmachine: STDERR: 
	I0917 10:41:54.521584    4654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2
	I0917 10:41:54.521595    4654 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:54.521604    4654 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:54.521643    4654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:19:d3:27:12:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2
	I0917 10:41:54.523317    4654 main.go:141] libmachine: STDOUT: 
	I0917 10:41:54.523330    4654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:54.523350    4654 client.go:171] duration metric: took 222.82225ms to LocalClient.Create
	I0917 10:41:56.525471    4654 start.go:128] duration metric: took 2.245680917s to createHost
	I0917 10:41:56.525546    4654 start.go:83] releasing machines lock for "docker-flags-981000", held for 2.245803209s
	W0917 10:41:56.525635    4654 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:56.542754    4654 out.go:177] * Deleting "docker-flags-981000" in qemu2 ...
	W0917 10:41:56.574182    4654 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:56.574200    4654 start.go:729] Will try again in 5 seconds ...
	I0917 10:42:01.576309    4654 start.go:360] acquireMachinesLock for docker-flags-981000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:42:01.673674    4654 start.go:364] duration metric: took 97.203375ms to acquireMachinesLock for "docker-flags-981000"
	I0917 10:42:01.673775    4654 start.go:93] Provisioning new machine with config: &{Name:docker-flags-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:42:01.674119    4654 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:42:01.686808    4654 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:42:01.735925    4654 start.go:159] libmachine.API.Create for "docker-flags-981000" (driver="qemu2")
	I0917 10:42:01.735987    4654 client.go:168] LocalClient.Create starting
	I0917 10:42:01.736115    4654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:42:01.736175    4654 main.go:141] libmachine: Decoding PEM data...
	I0917 10:42:01.736189    4654 main.go:141] libmachine: Parsing certificate...
	I0917 10:42:01.736251    4654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:42:01.736295    4654 main.go:141] libmachine: Decoding PEM data...
	I0917 10:42:01.736306    4654 main.go:141] libmachine: Parsing certificate...
	I0917 10:42:01.738302    4654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:42:01.987509    4654 main.go:141] libmachine: Creating SSH key...
	I0917 10:42:02.175915    4654 main.go:141] libmachine: Creating Disk image...
	I0917 10:42:02.175922    4654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:42:02.176117    4654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2
	I0917 10:42:02.185968    4654 main.go:141] libmachine: STDOUT: 
	I0917 10:42:02.185987    4654 main.go:141] libmachine: STDERR: 
	I0917 10:42:02.186045    4654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2 +20000M
	I0917 10:42:02.193989    4654 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:42:02.194015    4654 main.go:141] libmachine: STDERR: 
	I0917 10:42:02.194027    4654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2
	I0917 10:42:02.194032    4654 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:42:02.194039    4654 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:42:02.194082    4654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:91:61:4a:a0:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/docker-flags-981000/disk.qcow2
	I0917 10:42:02.195657    4654 main.go:141] libmachine: STDOUT: 
	I0917 10:42:02.195673    4654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:42:02.195685    4654 client.go:171] duration metric: took 459.704958ms to LocalClient.Create
	I0917 10:42:04.197595    4654 start.go:128] duration metric: took 2.523487458s to createHost
	I0917 10:42:04.197666    4654 start.go:83] releasing machines lock for "docker-flags-981000", held for 2.524035417s
	W0917 10:42:04.198016    4654 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:42:04.224420    4654 out.go:201] 
	W0917 10:42:04.228596    4654 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:42:04.228623    4654 out.go:270] * 
	* 
	W0917 10:42:04.231388    4654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:42:04.241498    4654 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.264584ms)

-- stdout --
	* The control-plane node docker-flags-981000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-981000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-981000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-981000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-981000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-981000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-981000\"\n"*.
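
On a healthy node, the Environment property of the docker systemd unit carries exactly the values passed via --docker-env, so the command above would be expected to print something like (illustrative, built from the flags under test):

	$ out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT ...
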
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.657084ms)

-- stdout --
	* The control-plane node docker-flags-981000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-981000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-981000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-981000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. Output: "* The control-plane node docker-flags-981000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-981000\"\n"
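
Each --docker-opt=key[=value] is forwarded to the dockerd command line as --key[=value], which is why the test greps the unit's ExecStart for --debug. On a running node the property would show the full daemon invocation, along the lines of (illustrative; the exact field layout varies by systemd version):

	$ out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }
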
panic.go:629: *** TestDockerFlags FAILED at 2024-09-17 10:42:04.381248 -0700 PDT m=+2803.405440876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-981000 -n docker-flags-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-981000 -n docker-flags-981000: exit status 7 (29.251833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-981000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-981000
--- FAIL: TestDockerFlags (10.33s)

TestForceSystemdFlag (10.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-388000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-388000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.984949042s)

-- stdout --
	* [force-systemd-flag-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-388000" primary control-plane node in "force-systemd-flag-388000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-388000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:41:49.112678    4633 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:41:49.112811    4633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:49.112814    4633 out.go:358] Setting ErrFile to fd 2...
	I0917 10:41:49.112817    4633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:49.112947    4633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:41:49.113998    4633 out.go:352] Setting JSON to false
	I0917 10:41:49.129864    4633 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4272,"bootTime":1726590637,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:41:49.129934    4633 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:41:49.136981    4633 out.go:177] * [force-systemd-flag-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:41:49.144898    4633 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:41:49.144961    4633 notify.go:220] Checking for updates...
	I0917 10:41:49.153909    4633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:41:49.157857    4633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:41:49.160915    4633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:41:49.163945    4633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:41:49.166902    4633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:41:49.170188    4633 config.go:182] Loaded profile config "force-systemd-env-460000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:49.170262    4633 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:49.170316    4633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:41:49.174878    4633 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:41:49.181883    4633 start.go:297] selected driver: qemu2
	I0917 10:41:49.181891    4633 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:41:49.181900    4633 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:41:49.184180    4633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:41:49.187891    4633 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:41:49.190959    4633 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 10:41:49.190972    4633 cni.go:84] Creating CNI manager for ""
	I0917 10:41:49.190994    4633 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:41:49.190999    4633 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:41:49.191024    4633 start.go:340] cluster config:
	{Name:force-systemd-flag-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:41:49.194714    4633 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:49.202908    4633 out.go:177] * Starting "force-systemd-flag-388000" primary control-plane node in "force-systemd-flag-388000" cluster
	I0917 10:41:49.206866    4633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:41:49.206886    4633 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:41:49.206896    4633 cache.go:56] Caching tarball of preloaded images
	I0917 10:41:49.206975    4633 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:41:49.206981    4633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:41:49.207051    4633 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/force-systemd-flag-388000/config.json ...
	I0917 10:41:49.207063    4633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/force-systemd-flag-388000/config.json: {Name:mkcdb1042045bd55b35dd6c645288d1855f58b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:41:49.207296    4633 start.go:360] acquireMachinesLock for force-systemd-flag-388000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:49.207333    4633 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "force-systemd-flag-388000"
	I0917 10:41:49.207345    4633 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:49.207370    4633 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:49.215871    4633 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:49.235198    4633 start.go:159] libmachine.API.Create for "force-systemd-flag-388000" (driver="qemu2")
	I0917 10:41:49.235229    4633 client.go:168] LocalClient.Create starting
	I0917 10:41:49.235306    4633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:49.235339    4633 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:49.235348    4633 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:49.235394    4633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:49.235423    4633 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:49.235436    4633 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:49.235812    4633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:49.397333    4633 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:49.551684    4633 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:49.551693    4633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:49.551898    4633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2
	I0917 10:41:49.561342    4633 main.go:141] libmachine: STDOUT: 
	I0917 10:41:49.561365    4633 main.go:141] libmachine: STDERR: 
	I0917 10:41:49.561425    4633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2 +20000M
	I0917 10:41:49.569302    4633 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:49.569318    4633 main.go:141] libmachine: STDERR: 
	I0917 10:41:49.569337    4633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2
	I0917 10:41:49.569342    4633 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:49.569355    4633 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:49.569384    4633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:10:73:fb:fd:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2
	I0917 10:41:49.571012    4633 main.go:141] libmachine: STDOUT: 
	I0917 10:41:49.571026    4633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:49.571048    4633 client.go:171] duration metric: took 335.822375ms to LocalClient.Create
	I0917 10:41:51.573157    4633 start.go:128] duration metric: took 2.365842792s to createHost
	I0917 10:41:51.573260    4633 start.go:83] releasing machines lock for "force-systemd-flag-388000", held for 2.365982875s
	W0917 10:41:51.573375    4633 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:51.602696    4633 out.go:177] * Deleting "force-systemd-flag-388000" in qemu2 ...
	W0917 10:41:51.628128    4633 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:51.628158    4633 start.go:729] Will try again in 5 seconds ...
	I0917 10:41:56.630236    4633 start.go:360] acquireMachinesLock for force-systemd-flag-388000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:56.630728    4633 start.go:364] duration metric: took 387.708µs to acquireMachinesLock for "force-systemd-flag-388000"
	I0917 10:41:56.630861    4633 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:56.631143    4633 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:56.639812    4633 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:56.689925    4633 start.go:159] libmachine.API.Create for "force-systemd-flag-388000" (driver="qemu2")
	I0917 10:41:56.689973    4633 client.go:168] LocalClient.Create starting
	I0917 10:41:56.690082    4633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:56.690144    4633 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:56.690173    4633 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:56.690234    4633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:56.690281    4633 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:56.690292    4633 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:56.691175    4633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:56.872114    4633 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:56.999955    4633 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:56.999961    4633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:57.000156    4633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2
	I0917 10:41:57.009568    4633 main.go:141] libmachine: STDOUT: 
	I0917 10:41:57.009591    4633 main.go:141] libmachine: STDERR: 
	I0917 10:41:57.009656    4633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2 +20000M
	I0917 10:41:57.017504    4633 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:57.017518    4633 main.go:141] libmachine: STDERR: 
	I0917 10:41:57.017537    4633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2
	I0917 10:41:57.017545    4633 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:57.017553    4633 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:57.017581    4633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:67:8a:e8:14:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-flag-388000/disk.qcow2
	I0917 10:41:57.019146    4633 main.go:141] libmachine: STDOUT: 
	I0917 10:41:57.019204    4633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:57.019223    4633 client.go:171] duration metric: took 329.253166ms to LocalClient.Create
	I0917 10:41:59.021341    4633 start.go:128] duration metric: took 2.390235208s to createHost
	I0917 10:41:59.021395    4633 start.go:83] releasing machines lock for "force-systemd-flag-388000", held for 2.390715875s
	W0917 10:41:59.021806    4633 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-388000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:59.032224    4633 out.go:201] 
	W0917 10:41:59.041430    4633 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:41:59.041457    4633 out.go:270] * 
	W0917 10:41:59.044311    4633 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:41:59.055343    4633 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-388000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-388000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-388000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.467625ms)

-- stdout --
	* The control-plane node force-systemd-flag-388000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-388000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-388000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-17 10:41:59.147837 -0700 PDT m=+2798.171867751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-388000 -n force-systemd-flag-388000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-388000 -n force-systemd-flag-388000: exit status 7 (32.835708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-388000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-388000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-388000
--- FAIL: TestForceSystemdFlag (10.17s)
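
Note: this failure is environmental rather than a code regression — every qemu2 VM create aborts because nothing is listening on /var/run/socket_vmnet ("Connection refused"), so socket_vmnet_client cannot hand the VM its network file descriptor. A minimal triage sketch for the CI host, assuming the install paths visible in the cluster config above (SocketVMnetPath=/var/run/socket_vmnet, SocketVMnetClientPath=/opt/socket_vmnet/bin/socket_vmnet_client); the daemon binary path and gateway address are assumptions based on the default socket_vmnet layout:

	# Is the socket present, and is any daemon serving it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet

	# If not, start the daemon by hand (path and --vmnet-gateway are assumptions
	# matching the default /opt/socket_vmnet install seen in the log):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet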

TestForceSystemdEnv (10.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-460000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-460000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.296664292s)

-- stdout --
	* [force-systemd-env-460000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-460000" primary control-plane node in "force-systemd-env-460000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-460000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:41:43.699188    4598 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:41:43.699303    4598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:43.699307    4598 out.go:358] Setting ErrFile to fd 2...
	I0917 10:41:43.699309    4598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:43.699431    4598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:41:43.700588    4598 out.go:352] Setting JSON to false
	I0917 10:41:43.717597    4598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4266,"bootTime":1726590637,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:41:43.717676    4598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:41:43.724761    4598 out.go:177] * [force-systemd-env-460000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:41:43.734626    4598 notify.go:220] Checking for updates...
	I0917 10:41:43.739592    4598 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:41:43.742620    4598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:41:43.745551    4598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:41:43.748561    4598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:41:43.751587    4598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:41:43.754594    4598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0917 10:41:43.757886    4598 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:43.757928    4598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:41:43.762605    4598 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:41:43.769558    4598 start.go:297] selected driver: qemu2
	I0917 10:41:43.769564    4598 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:41:43.769569    4598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:41:43.771855    4598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:41:43.775563    4598 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:41:43.778645    4598 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 10:41:43.778658    4598 cni.go:84] Creating CNI manager for ""
	I0917 10:41:43.778680    4598 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:41:43.778689    4598 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:41:43.778711    4598 start.go:340] cluster config:
	{Name:force-systemd-env-460000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:41:43.782351    4598 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:43.789574    4598 out.go:177] * Starting "force-systemd-env-460000" primary control-plane node in "force-systemd-env-460000" cluster
	I0917 10:41:43.793566    4598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:41:43.793590    4598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:41:43.793599    4598 cache.go:56] Caching tarball of preloaded images
	I0917 10:41:43.793670    4598 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:41:43.793676    4598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:41:43.793740    4598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/force-systemd-env-460000/config.json ...
	I0917 10:41:43.793750    4598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/force-systemd-env-460000/config.json: {Name:mk6b3d7d0925c0572d382ecd0dc43ab7cc614b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:41:43.794000    4598 start.go:360] acquireMachinesLock for force-systemd-env-460000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:43.794037    4598 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "force-systemd-env-460000"
	I0917 10:41:43.794048    4598 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:43.794079    4598 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:43.799607    4598 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:43.817103    4598 start.go:159] libmachine.API.Create for "force-systemd-env-460000" (driver="qemu2")
	I0917 10:41:43.817128    4598 client.go:168] LocalClient.Create starting
	I0917 10:41:43.817187    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:43.817216    4598 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:43.817226    4598 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:43.817261    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:43.817283    4598 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:43.817292    4598 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:43.817622    4598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:43.980063    4598 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:44.030365    4598 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:44.030371    4598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:44.030547    4598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2
	I0917 10:41:44.039986    4598 main.go:141] libmachine: STDOUT: 
	I0917 10:41:44.040007    4598 main.go:141] libmachine: STDERR: 
	I0917 10:41:44.040080    4598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2 +20000M
	I0917 10:41:44.048337    4598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:44.048360    4598 main.go:141] libmachine: STDERR: 
	I0917 10:41:44.048383    4598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2
	I0917 10:41:44.048389    4598 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:44.048401    4598 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:44.048437    4598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:14:ef:86:98:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2
	I0917 10:41:44.050039    4598 main.go:141] libmachine: STDOUT: 
	I0917 10:41:44.050055    4598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:44.050074    4598 client.go:171] duration metric: took 232.946875ms to LocalClient.Create
	I0917 10:41:46.052128    4598 start.go:128] duration metric: took 2.2580885s to createHost
	I0917 10:41:46.052147    4598 start.go:83] releasing machines lock for "force-systemd-env-460000", held for 2.258174625s
	W0917 10:41:46.052160    4598 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:46.057452    4598 out.go:177] * Deleting "force-systemd-env-460000" in qemu2 ...
	W0917 10:41:46.072866    4598 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:46.072880    4598 start.go:729] Will try again in 5 seconds ...
	I0917 10:41:51.074932    4598 start.go:360] acquireMachinesLock for force-systemd-env-460000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:51.573542    4598 start.go:364] duration metric: took 498.461834ms to acquireMachinesLock for "force-systemd-env-460000"
	I0917 10:41:51.573644    4598 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:51.573908    4598 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:51.588691    4598 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:41:51.636925    4598 start.go:159] libmachine.API.Create for "force-systemd-env-460000" (driver="qemu2")
	I0917 10:41:51.636967    4598 client.go:168] LocalClient.Create starting
	I0917 10:41:51.637096    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:51.637165    4598 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:51.637182    4598 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:51.637255    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:51.637306    4598 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:51.637322    4598 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:51.637918    4598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:51.814028    4598 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:51.900608    4598 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:51.900613    4598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:51.900799    4598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2
	I0917 10:41:51.910191    4598 main.go:141] libmachine: STDOUT: 
	I0917 10:41:51.910215    4598 main.go:141] libmachine: STDERR: 
	I0917 10:41:51.910301    4598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2 +20000M
	I0917 10:41:51.918067    4598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:51.918083    4598 main.go:141] libmachine: STDERR: 
	I0917 10:41:51.918095    4598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2
	I0917 10:41:51.918100    4598 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:51.918111    4598 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:51.918138    4598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:4f:a5:97:72:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/force-systemd-env-460000/disk.qcow2
	I0917 10:41:51.919702    4598 main.go:141] libmachine: STDOUT: 
	I0917 10:41:51.919716    4598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:51.919730    4598 client.go:171] duration metric: took 282.76525ms to LocalClient.Create
	I0917 10:41:53.921907    4598 start.go:128] duration metric: took 2.348009s to createHost
	I0917 10:41:53.922004    4598 start.go:83] releasing machines lock for "force-systemd-env-460000", held for 2.348493458s
	W0917 10:41:53.922398    4598 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-460000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:53.934873    4598 out.go:201] 
	W0917 10:41:53.939964    4598 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:41:53.940012    4598 out.go:270] * 
	W0917 10:41:53.943034    4598 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:41:53.951895    4598 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-460000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-460000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-460000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.888542ms)

-- stdout --
	* The control-plane node force-systemd-env-460000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-460000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-460000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-17 10:41:54.046108 -0700 PDT m=+2793.069980501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-460000 -n force-systemd-env-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-460000 -n force-systemd-env-460000: exit status 7 (34.009667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-460000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-460000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-460000
--- FAIL: TestForceSystemdEnv (10.49s)
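
Note: TestForceSystemdEnv fails for the same environmental reason as TestForceSystemdFlag above (socket_vmnet unreachable), before the test's real assertion is ever exercised. Had the VM started, the test would check the guest's Docker cgroup driver — the exact command it runs is visible at docker_test.go:110 above and can be reproduced by hand; with MINIKUBE_FORCE_SYSTEMD=true the expected output is "systemd" rather than "cgroupfs":

	out/minikube-darwin-arm64 -p force-systemd-env-460000 ssh "docker info --format {{.CgroupDriver}}"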

TestFunctional/parallel/ServiceCmdConnect (39.37s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-334000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-334000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-xjcfd" [512c2ec2-9e94-4ad6-8b57-19db7c44aad4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-xjcfd" [512c2ec2-9e94-4ad6-8b57-19db7c44aad4] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.010323792s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30486
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
E0917 10:14:11.644936    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30486: Get "http://192.168.105.4:30486": dial tcp 192.168.105.4:30486: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-334000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-xjcfd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-334000/192.168.105.4
Start Time:       Tue, 17 Sep 2024 10:13:53 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://d42c7a08bd302231019924f31cef4ebbadf5fe6d9aca3849c5e77818d34f372f
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 17 Sep 2024 10:14:12 -0700
      Finished:     Tue, 17 Sep 2024 10:14:12 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xh2kk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-xh2kk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-xjcfd to functional-334000
  Normal   Pulling    38s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     35s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 2.629s (2.629s including waiting). Image size: 84957542 bytes.
  Normal   Created    19s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    19s (x3 over 35s)  kubelet            Started container echoserver-arm
  Normal   Pulled     19s (x2 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    6s (x4 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-xjcfd_default(512c2ec2-9e94-4ad6-8b57-19db7c44aad4)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-334000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
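
Note: "exec format error" means the kernel refused to execute the container's entrypoint binary, which on this arm64 node points to an architecture mismatch in the echoserver-arm:1.8 image (the nginx binary it ships does not match the node's architecture), hence the CrashLoopBackOff seen in the pod events above. One way to confirm what platform the image actually carries — standard Docker CLI commands, with Docker availability on the inspecting machine being an assumption:

	# Platform recorded in the local image metadata
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8

	# Or query the registry manifest without pulling the image
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8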
functional_test.go:1614: (dbg) Run:  kubectl --context functional-334000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.180.197
IPs:                      10.103.180.197
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30486/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
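
Note: the empty Endpoints field above is consistent with the repeated "connection refused" fetches earlier in the test: the pod never reports Ready (CrashLoopBackOff), so the service has no backends and connections to the NodePort at 192.168.105.4:30486 are refused. This can be cross-checked with, for example:

	kubectl --context functional-334000 get endpoints hello-node-connect
	kubectl --context functional-334000 get pods -l app=hello-node-connect -o wide
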
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-334000 -n functional-334000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 logs -n 25: (1.003805167s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port51347177/001:/mount-9p        |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh -- ls                                                                                          | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh cat                                                                                            | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | /mount-9p/test-1726593254664749000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh stat                                                                                           | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh stat                                                                                           | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh sudo                                                                                           | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3161448144/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh -- ls                                                                                          | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh sudo                                                                                           | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-334000 ssh findmnt                                                                                        | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT | 17 Sep 24 10:14 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-334000                                                                                                 | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-334000 --dry-run                                                                                       | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-334000 | jenkins | v1.34.0 | 17 Sep 24 10:14 PDT |                     |
	|           | -p functional-334000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:14:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:14:24.025965    2933 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:14:24.026097    2933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:14:24.026101    2933 out.go:358] Setting ErrFile to fd 2...
	I0917 10:14:24.026104    2933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:14:24.026216    2933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:14:24.027303    2933 out.go:352] Setting JSON to false
	I0917 10:14:24.043111    2933 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2627,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:14:24.043222    2933 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:14:24.045707    2933 out.go:177] * [functional-334000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:14:24.053061    2933 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:14:24.053093    2933 notify.go:220] Checking for updates...
	I0917 10:14:24.059994    2933 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:14:24.062999    2933 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:14:24.066014    2933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:14:24.068954    2933 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:14:24.071991    2933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:14:24.075297    2933 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:14:24.075544    2933 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:14:24.079991    2933 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:14:24.087036    2933 start.go:297] selected driver: qemu2
	I0917 10:14:24.087046    2933 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:14:24.087097    2933 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:14:24.089290    2933 cni.go:84] Creating CNI manager for ""
	I0917 10:14:24.089318    2933 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:14:24.089355    2933 start.go:340] cluster config:
	{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:14:24.099977    2933 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.447588278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.447740825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.447767243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.447817745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.479278684Z" level=info msg="shim disconnected" id=ff5311862f5502887b4334864c28b8041b7fd327bb20ec6b1fff6e7f2c8531b6 namespace=moby
	Sep 17 17:14:23 functional-334000 dockerd[5660]: time="2024-09-17T17:14:23.479483859Z" level=info msg="ignoring event" container=ff5311862f5502887b4334864c28b8041b7fd327bb20ec6b1fff6e7f2c8531b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.479525235Z" level=warning msg="cleaning up after shim disconnected" id=ff5311862f5502887b4334864c28b8041b7fd327bb20ec6b1fff6e7f2c8531b6 namespace=moby
	Sep 17 17:14:23 functional-334000 dockerd[5666]: time="2024-09-17T17:14:23.479530735Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.955807739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956007997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956216213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956220880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956239131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956248423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956288841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:24 functional-334000 dockerd[5666]: time="2024-09-17T17:14:24.956350052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:25 functional-334000 cri-dockerd[5924]: time="2024-09-17T17:14:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83734e6676cb797c0d69f01ea17908dd12ce706670b72cab5a575fbe9e5328a3/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 17:14:25 functional-334000 cri-dockerd[5924]: time="2024-09-17T17:14:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/724761df76dd28f6e8a993ab81701063af0e56a6cdc55d0f92e4b56c8a38a7d8/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 17:14:25 functional-334000 dockerd[5660]: time="2024-09-17T17:14:25.252370878Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 17 17:14:27 functional-334000 cri-dockerd[5924]: time="2024-09-17T17:14:27Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 17 17:14:27 functional-334000 dockerd[5666]: time="2024-09-17T17:14:27.158783645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:14:27 functional-334000 dockerd[5666]: time="2024-09-17T17:14:27.158818396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:14:27 functional-334000 dockerd[5666]: time="2024-09-17T17:14:27.158837772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:27 functional-334000 dockerd[5666]: time="2024-09-17T17:14:27.158869565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:14:27 functional-334000 dockerd[5660]: time="2024-09-17T17:14:27.292617461Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	ef74dc642ff19       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 seconds ago        Running             dashboard-metrics-scraper   0                   83734e6676cb7       dashboard-metrics-scraper-c5db448b4-f8dxv
	ff5311862f550       72565bf5bbedf                                                                                          8 seconds ago        Exited              echoserver-arm              2                   08c71708010e7       hello-node-64b4f8f9ff-sjwwb
	c877e9fc8e51f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    14 seconds ago       Exited              mount-munger                0                   978db2b69cae9       busybox-mount
	d42c7a08bd302       72565bf5bbedf                                                                                          19 seconds ago       Exited              echoserver-arm              2                   d0f6a529915cd       hello-node-connect-65d86f57f4-xjcfd
	b2989b05846cf       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          31 seconds ago       Running             myfrontend                  0                   4b8bd4b1e5b19       sp-pod
	4a3d8f25aeb08       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          46 seconds ago       Running             nginx                       0                   b0f6c8689d133       nginx-svc
	b7a337c447d61       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     2                   99de7cbc494e8       coredns-7c65d6cfc9-dhwmm
	d1b02a9ddd8c2       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   e2fef03444662       storage-provisioner
	434c9e5fa693c       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  2                   4fc76a57401cd       kube-proxy-7mmtt
	4c6ab722e7225       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   274bdb0f6bc25       etcd-functional-334000
	8c9e21be1aaf6       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     2                   08793377935a9       kube-controller-manager-functional-334000
	4f002e0264c2d       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              2                   7c86101ab737f       kube-scheduler-functional-334000
	31e70c752c7be       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   c6247a24df1af       kube-apiserver-functional-334000
	217ef30865ffd       2f6c962e7b831                                                                                          About a minute ago   Exited              coredns                     1                   1d60915b4d0f7       coredns-7c65d6cfc9-dhwmm
	c9b1a8e0cf07f       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         1                   a51f4229195b3       storage-provisioner
	23b73f7fb490c       24a140c548c07                                                                                          About a minute ago   Exited              kube-proxy                  1                   335f5adf681a7       kube-proxy-7mmtt
	c961fe959dee6       7f8aa378bb47d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   f759779e06432       kube-scheduler-functional-334000
	9fb406e1701e0       279f381cb3736                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   6913f64c29970       kube-controller-manager-functional-334000
	d09f59178aa1a       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   cd41bc41e1697       etcd-functional-334000
	
	
	==> coredns [217ef30865ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52160 - 32637 "HINFO IN 4510096759390707489.7904953572082731128. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010845278s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7a337c447d6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37071 - 10207 "HINFO IN 7738639131008857789.4195433268060808568. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009316425s
	[INFO] 10.244.0.1:12985 - 51013 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000114671s
	[INFO] 10.244.0.1:34902 - 56434 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000260261s
	[INFO] 10.244.0.1:18928 - 48277 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000030543s
	[INFO] 10.244.0.1:14009 - 54250 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001180506s
	[INFO] 10.244.0.1:21165 - 57303 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00007617s
	[INFO] 10.244.0.1:56861 - 55958 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000105629s
	
	
	==> describe nodes <==
	Name:               functional-334000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-334000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=functional-334000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T10_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:11:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-334000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:14:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:14:17 +0000   Tue, 17 Sep 2024 17:11:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:14:17 +0000   Tue, 17 Sep 2024 17:11:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:14:17 +0000   Tue, 17 Sep 2024 17:11:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:14:17 +0000   Tue, 17 Sep 2024 17:12:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-334000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 42e3f38bbad245ea969a95bbf2cfb434
	  System UUID:                42e3f38bbad245ea969a95bbf2cfb434
	  Boot ID:                    fd3d7afb-a94f-4f06-8e60-eedabe24c508
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-sjwwb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     hello-node-connect-65d86f57f4-xjcfd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 coredns-7c65d6cfc9-dhwmm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m26s
	  kube-system                 etcd-functional-334000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-334000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-functional-334000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-7mmtt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-functional-334000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-f8dxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-l227m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m26s                kube-proxy       
	  Normal  Starting                 73s                  kube-proxy       
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m32s                kubelet          Node functional-334000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m32s                kubelet          Node functional-334000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s                kubelet          Node functional-334000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m32s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m28s                node-controller  Node functional-334000 event: Registered Node functional-334000 in Controller
	  Normal  NodeReady                2m28s                kubelet          Node functional-334000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node functional-334000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node functional-334000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node functional-334000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                 node-controller  Node functional-334000 event: Registered Node functional-334000 in Controller
	  Normal  Starting                 78s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)    kubelet          Node functional-334000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)    kubelet          Node functional-334000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)    kubelet          Node functional-334000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           71s                  node-controller  Node functional-334000 event: Registered Node functional-334000 in Controller
	
	
	==> dmesg <==
	[ +14.146944] systemd-fstab-generator[4733]: Ignoring "noauto" option for root device
	[  +0.057544] kauditd_printk_skb: 35 callbacks suppressed
	[ +11.685996] systemd-fstab-generator[5179]: Ignoring "noauto" option for root device
	[  +0.051503] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.093954] systemd-fstab-generator[5213]: Ignoring "noauto" option for root device
	[  +0.083408] systemd-fstab-generator[5225]: Ignoring "noauto" option for root device
	[  +0.100835] systemd-fstab-generator[5239]: Ignoring "noauto" option for root device
	[Sep17 17:13] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.300923] systemd-fstab-generator[5873]: Ignoring "noauto" option for root device
	[  +0.078581] systemd-fstab-generator[5885]: Ignoring "noauto" option for root device
	[  +0.091556] systemd-fstab-generator[5897]: Ignoring "noauto" option for root device
	[  +0.083233] systemd-fstab-generator[5912]: Ignoring "noauto" option for root device
	[  +0.218917] systemd-fstab-generator[6083]: Ignoring "noauto" option for root device
	[  +1.293527] systemd-fstab-generator[6209]: Ignoring "noauto" option for root device
	[  +0.955545] kauditd_printk_skb: 164 callbacks suppressed
	[  +5.810306] kauditd_printk_skb: 66 callbacks suppressed
	[ +12.689882] systemd-fstab-generator[7238]: Ignoring "noauto" option for root device
	[  +5.148608] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.098737] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.898856] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.922193] kauditd_printk_skb: 25 callbacks suppressed
	[Sep17 17:14] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.035298] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.197329] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.773259] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [4c6ab722e722] <==
	{"level":"info","ts":"2024-09-17T17:13:14.688950Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:13:14.688971Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:13:14.690344Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:13:14.694695Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T17:13:14.694755Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T17:13:14.694765Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T17:13:14.694846Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T17:13:14.694885Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T17:13:16.036460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:13:16.036604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:13:16.036694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-17T17:13:16.037192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-17T17:13:16.037217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-17T17:13:16.037247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-17T17:13:16.037473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-17T17:13:16.042307Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-334000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:13:16.042625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:13:16.043074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:13:16.043256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:13:16.042743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:13:16.045432Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:13:16.046031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:13:16.047835Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-17T17:13:16.049339Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:14:31.777405Z","caller":"traceutil/trace.go:171","msg":"trace[873997722] transaction","detail":"{read_only:false; response_revision:875; number_of_response:1; }","duration":"191.45596ms","start":"2024-09-17T17:14:31.585940Z","end":"2024-09-17T17:14:31.777396Z","steps":["trace[873997722] 'process raft request'  (duration: 191.398874ms)"],"step_count":1}
	
	
	==> etcd [d09f59178aa1] <==
	{"level":"info","ts":"2024-09-17T17:12:31.518382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:12:31.518445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-17T17:12:31.518480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-17T17:12:31.518530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-17T17:12:31.520948Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-334000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:12:31.521161Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:12:31.521859Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:12:31.523385Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:12:31.523630Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:12:31.523803Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:12:31.524907Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:12:31.526741Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-17T17:12:31.529530Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/09/17 17:12:59 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-17T17:12:59.203298Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T17:12:59.203314Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-334000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-17T17:12:59.203345Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:12:59.203412Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/09/17 17:12:59 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:12:59.231030Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:12:59.231068Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:12:59.231092Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-17T17:12:59.232287Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T17:12:59.232326Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T17:12:59.232334Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-334000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 17:14:31 up 2 min,  0 users,  load average: 0.97, 0.55, 0.22
	Linux functional-334000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [31e70c752c7b] <==
	I0917 17:13:16.665392       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:13:16.665475       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:13:16.665505       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:13:16.665514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:13:16.665520       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:13:16.666815       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 17:13:16.667860       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:13:16.703436       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:13:17.564968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 17:13:17.668590       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0917 17:13:17.669241       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:13:17.976890       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 17:13:17.983012       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 17:13:17.993630       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:13:18.002161       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 17:13:18.004030       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 17:13:20.060580       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 17:13:37.902281       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.252.54"}
	I0917 17:13:42.653481       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.139.47"}
	I0917 17:13:53.163278       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 17:13:53.205318       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.180.197"}
	I0917 17:14:06.446760       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.70.90"}
	I0917 17:14:24.551082       1 controller.go:615] quota admission added evaluator for: namespaces
	I0917 17:14:24.626635       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.73.125"}
	I0917 17:14:24.635988       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.137.55"}
	
	
	==> kube-controller-manager [8c9e21be1aaf] <==
	I0917 17:14:07.353237       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.334µs"
	I0917 17:14:08.389764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.543µs"
	I0917 17:14:13.497122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.167µs"
	I0917 17:14:17.701882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-334000"
	I0917 17:14:23.417938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.459µs"
	I0917 17:14:23.643159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="26.876µs"
	I0917 17:14:24.580722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.009288ms"
	E0917 17:14:24.580742       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 17:14:24.585140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.486271ms"
	E0917 17:14:24.585162       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 17:14:24.586282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.302454ms"
	E0917 17:14:24.586296       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 17:14:24.591105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.312501ms"
	E0917 17:14:24.591125       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 17:14:24.591214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.745516ms"
	E0917 17:14:24.591227       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 17:14:24.604081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.999249ms"
	I0917 17:14:24.611994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.74202ms"
	I0917 17:14:24.612149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.050972ms"
	I0917 17:14:24.612175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.292µs"
	I0917 17:14:24.618193       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.137566ms"
	I0917 17:14:24.618272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="37.168µs"
	I0917 17:14:25.425597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="74.211µs"
	I0917 17:14:27.694047       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.921973ms"
	I0917 17:14:27.694457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="42.043µs"
	
	
	==> kube-controller-manager [9fb406e1701e] <==
	I0917 17:12:35.374793       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0917 17:12:35.374800       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0917 17:12:35.395722       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0917 17:12:35.395766       1 shared_informer.go:320] Caches are synced for cronjob
	I0917 17:12:35.395779       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0917 17:12:35.395809       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0917 17:12:35.395919       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0917 17:12:35.396830       1 shared_informer.go:320] Caches are synced for crt configmap
	I0917 17:12:35.396833       1 shared_informer.go:320] Caches are synced for service account
	I0917 17:12:35.396838       1 shared_informer.go:320] Caches are synced for stateful set
	I0917 17:12:35.450196       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 17:12:35.499025       1 shared_informer.go:320] Caches are synced for deployment
	I0917 17:12:35.545234       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0917 17:12:35.545345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.05µs"
	I0917 17:12:35.548405       1 shared_informer.go:320] Caches are synced for HPA
	I0917 17:12:35.549236       1 shared_informer.go:320] Caches are synced for disruption
	I0917 17:12:35.551529       1 shared_informer.go:320] Caches are synced for endpoint
	I0917 17:12:35.597934       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 17:12:35.598161       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 17:12:35.645763       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0917 17:12:35.986171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.915219ms"
	I0917 17:12:35.986442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.009µs"
	I0917 17:12:36.011469       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 17:12:36.094289       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 17:12:36.094334       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [23b73f7fb490] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:12:33.140961       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:12:33.144420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0917 17:12:33.144457       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:12:33.153115       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:12:33.153130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:12:33.153142       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:12:33.153719       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:12:33.153885       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:12:33.153893       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:12:33.154448       1 config.go:199] "Starting service config controller"
	I0917 17:12:33.154461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:12:33.154504       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:12:33.154510       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:12:33.154705       1 config.go:328] "Starting node config controller"
	I0917 17:12:33.154709       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:12:33.255426       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:12:33.255433       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:12:33.255461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [434c9e5fa693] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:13:17.932350       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:13:17.958589       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0917 17:13:17.958619       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:13:17.976047       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:13:17.976072       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:13:17.976086       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:13:17.976732       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:13:17.976833       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:13:17.976844       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:13:17.977492       1 config.go:199] "Starting service config controller"
	I0917 17:13:17.977506       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:13:17.977517       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:13:17.977522       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:13:17.978021       1 config.go:328] "Starting node config controller"
	I0917 17:13:17.978031       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:13:18.078180       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:13:18.078200       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:13:18.078211       1 shared_informer.go:320] Caches are synced for endpoint slice config
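
	The nftables "Operation not supported" errors in both kube-proxy instances come from the cleanup pass on a guest kernel without nftables support; kube-proxy then proceeds with the iptables proxier ("Using iptables Proxier"), so on this image they are expected noise rather than a failure. The nodePortAddresses warning can be silenced the way the log itself suggests, by setting nodePortAddresses to "primary" (a keyword this kube-proxy version supports, per its own warning text). In a kubeadm-based cluster such as minikube's that setting lives in the kube-proxy ConfigMap, e.g.:

	kubectl --context functional-334000 -n kube-system edit configmap kube-proxy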
	
	
	==> kube-scheduler [4f002e0264c2] <==
	I0917 17:13:14.918674       1 serving.go:386] Generated self-signed cert in-memory
	W0917 17:13:16.590458       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 17:13:16.590501       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 17:13:16.590531       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 17:13:16.590545       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 17:13:16.611623       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 17:13:16.611722       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:13:16.612839       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 17:13:16.612872       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 17:13:16.613099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 17:13:16.613819       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:13:16.713304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c961fe959dee] <==
	I0917 17:12:30.675897       1 serving.go:386] Generated self-signed cert in-memory
	W0917 17:12:32.041646       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 17:12:32.041774       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 17:12:32.041793       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 17:12:32.041800       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 17:12:32.058918       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 17:12:32.058934       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:12:32.059896       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 17:12:32.060060       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 17:12:32.060104       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 17:12:32.060122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:12:32.162534       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 17:12:59.191879       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 17:12:59.191908       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0917 17:12:59.191971       1 run.go:72] "command failed" err="finished without leader elect"
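
	Both scheduler instances log the same trio of warnings because, at startup, system:kube-scheduler cannot yet read the extension-apiserver-authentication configmap; the cache sync moments later shows each instance proceeding anyway. The "finished without leader elect" error at 17:12:59 is the first instance being torn down during the control-plane restart, not an independent crash. Had the RBAC warnings persisted, the remediation hinted at in the message, filled in for the scheduler user (the rolebinding name below is arbitrary):

	kubectl --context functional-334000 -n kube-system create rolebinding scheduler-authentication-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler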
	
	
	==> kubelet <==
	Sep 17 17:14:13 functional-334000 kubelet[6216]: I0917 17:14:13.481092    6216 scope.go:117] "RemoveContainer" containerID="a3128214e305b9c4d73a71bdfc9a070ac5eea42cb04ac2af77b7f0039fe13e53"
	Sep 17 17:14:13 functional-334000 kubelet[6216]: I0917 17:14:13.481303    6216 scope.go:117] "RemoveContainer" containerID="d42c7a08bd302231019924f31cef4ebbadf5fe6d9aca3849c5e77818d34f372f"
	Sep 17 17:14:13 functional-334000 kubelet[6216]: E0917 17:14:13.481392    6216 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-xjcfd_default(512c2ec2-9e94-4ad6-8b57-19db7c44aad4)\"" pod="default/hello-node-connect-65d86f57f4-xjcfd" podUID="512c2ec2-9e94-4ad6-8b57-19db7c44aad4"
	Sep 17 17:14:13 functional-334000 kubelet[6216]: I0917 17:14:13.506864    6216 scope.go:117] "RemoveContainer" containerID="4e5813acab55528ff435d8556eb8b0659e1ed8c7bdd932b8aa0561525455b33d"
	Sep 17 17:14:15 functional-334000 kubelet[6216]: I0917 17:14:15.935657    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/853a7577-92f7-4f8d-8518-71d57a9f6690-test-volume\") pod \"busybox-mount\" (UID: \"853a7577-92f7-4f8d-8518-71d57a9f6690\") " pod="default/busybox-mount"
	Sep 17 17:14:15 functional-334000 kubelet[6216]: I0917 17:14:15.935707    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjsgw\" (UniqueName: \"kubernetes.io/projected/853a7577-92f7-4f8d-8518-71d57a9f6690-kube-api-access-zjsgw\") pod \"busybox-mount\" (UID: \"853a7577-92f7-4f8d-8518-71d57a9f6690\") " pod="default/busybox-mount"
	Sep 17 17:14:19 functional-334000 kubelet[6216]: I0917 17:14:19.769887    6216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjsgw\" (UniqueName: \"kubernetes.io/projected/853a7577-92f7-4f8d-8518-71d57a9f6690-kube-api-access-zjsgw\") pod \"853a7577-92f7-4f8d-8518-71d57a9f6690\" (UID: \"853a7577-92f7-4f8d-8518-71d57a9f6690\") "
	Sep 17 17:14:19 functional-334000 kubelet[6216]: I0917 17:14:19.769907    6216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/853a7577-92f7-4f8d-8518-71d57a9f6690-test-volume\") pod \"853a7577-92f7-4f8d-8518-71d57a9f6690\" (UID: \"853a7577-92f7-4f8d-8518-71d57a9f6690\") "
	Sep 17 17:14:19 functional-334000 kubelet[6216]: I0917 17:14:19.769932    6216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853a7577-92f7-4f8d-8518-71d57a9f6690-test-volume" (OuterVolumeSpecName: "test-volume") pod "853a7577-92f7-4f8d-8518-71d57a9f6690" (UID: "853a7577-92f7-4f8d-8518-71d57a9f6690"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:14:19 functional-334000 kubelet[6216]: I0917 17:14:19.772969    6216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853a7577-92f7-4f8d-8518-71d57a9f6690-kube-api-access-zjsgw" (OuterVolumeSpecName: "kube-api-access-zjsgw") pod "853a7577-92f7-4f8d-8518-71d57a9f6690" (UID: "853a7577-92f7-4f8d-8518-71d57a9f6690"). InnerVolumeSpecName "kube-api-access-zjsgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:14:19 functional-334000 kubelet[6216]: I0917 17:14:19.871013    6216 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zjsgw\" (UniqueName: \"kubernetes.io/projected/853a7577-92f7-4f8d-8518-71d57a9f6690-kube-api-access-zjsgw\") on node \"functional-334000\" DevicePath \"\""
	Sep 17 17:14:19 functional-334000 kubelet[6216]: I0917 17:14:19.871031    6216 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/853a7577-92f7-4f8d-8518-71d57a9f6690-test-volume\") on node \"functional-334000\" DevicePath \"\""
	Sep 17 17:14:20 functional-334000 kubelet[6216]: I0917 17:14:20.607629    6216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978db2b69cae9fdabe6ab59dad7bb12e77ece172fe66caa7a26a58fab1dfe8f6"
	Sep 17 17:14:23 functional-334000 kubelet[6216]: I0917 17:14:23.411646    6216 scope.go:117] "RemoveContainer" containerID="33a37756d6833a5f7faada5ef1e5c464ccc036d4934279cc73dbaedc4a743957"
	Sep 17 17:14:23 functional-334000 kubelet[6216]: I0917 17:14:23.637549    6216 scope.go:117] "RemoveContainer" containerID="33a37756d6833a5f7faada5ef1e5c464ccc036d4934279cc73dbaedc4a743957"
	Sep 17 17:14:23 functional-334000 kubelet[6216]: I0917 17:14:23.637677    6216 scope.go:117] "RemoveContainer" containerID="ff5311862f5502887b4334864c28b8041b7fd327bb20ec6b1fff6e7f2c8531b6"
	Sep 17 17:14:23 functional-334000 kubelet[6216]: E0917 17:14:23.637752    6216 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-sjwwb_default(e7274415-db5f-4307-8b31-cef7ad13570e)\"" pod="default/hello-node-64b4f8f9ff-sjwwb" podUID="e7274415-db5f-4307-8b31-cef7ad13570e"
	Sep 17 17:14:24 functional-334000 kubelet[6216]: E0917 17:14:24.600527    6216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="853a7577-92f7-4f8d-8518-71d57a9f6690" containerName="mount-munger"
	Sep 17 17:14:24 functional-334000 kubelet[6216]: I0917 17:14:24.600551    6216 memory_manager.go:354] "RemoveStaleState removing state" podUID="853a7577-92f7-4f8d-8518-71d57a9f6690" containerName="mount-munger"
	Sep 17 17:14:24 functional-334000 kubelet[6216]: I0917 17:14:24.724648    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87ql\" (UniqueName: \"kubernetes.io/projected/7cb22b31-db82-4afd-aa11-e99f61436d3d-kube-api-access-v87ql\") pod \"dashboard-metrics-scraper-c5db448b4-f8dxv\" (UID: \"7cb22b31-db82-4afd-aa11-e99f61436d3d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-f8dxv"
	Sep 17 17:14:24 functional-334000 kubelet[6216]: I0917 17:14:24.724678    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fq42\" (UniqueName: \"kubernetes.io/projected/24cae2c2-613f-4a12-b36a-c23f19a0fb03-kube-api-access-5fq42\") pod \"kubernetes-dashboard-695b96c756-l227m\" (UID: \"24cae2c2-613f-4a12-b36a-c23f19a0fb03\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-l227m"
	Sep 17 17:14:24 functional-334000 kubelet[6216]: I0917 17:14:24.724691    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7cb22b31-db82-4afd-aa11-e99f61436d3d-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-f8dxv\" (UID: \"7cb22b31-db82-4afd-aa11-e99f61436d3d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-f8dxv"
	Sep 17 17:14:24 functional-334000 kubelet[6216]: I0917 17:14:24.724699    6216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24cae2c2-613f-4a12-b36a-c23f19a0fb03-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-l227m\" (UID: \"24cae2c2-613f-4a12-b36a-c23f19a0fb03\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-l227m"
	Sep 17 17:14:25 functional-334000 kubelet[6216]: I0917 17:14:25.412367    6216 scope.go:117] "RemoveContainer" containerID="d42c7a08bd302231019924f31cef4ebbadf5fe6d9aca3849c5e77818d34f372f"
	Sep 17 17:14:25 functional-334000 kubelet[6216]: E0917 17:14:25.412550    6216 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-xjcfd_default(512c2ec2-9e94-4ad6-8b57-19db7c44aad4)\"" pod="default/hello-node-connect-65d86f57f4-xjcfd" podUID="512c2ec2-9e94-4ad6-8b57-19db7c44aad4"
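
	The kubelet log pins down the ServiceCmdConnect failure: the echoserver-arm container in hello-node-connect-65d86f57f4-xjcfd crashes on start and sits in a 20s CrashLoopBackOff, so the service under test never answers. (The busybox-mount activity interleaved here belongs to a different, passing test on the same node.) The usual next step is to pull the crashed container's previous output, with the pod name taken from the lines above:

	kubectl --context functional-334000 logs hello-node-connect-65d86f57f4-xjcfd --previous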
	
	
	==> storage-provisioner [c9b1a8e0cf07] <==
	I0917 17:12:33.089716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 17:12:33.093926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 17:12:33.093940       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 17:12:33.101884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 17:12:33.102246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-334000_e5e593b2-8602-4dda-b9cf-89d03c53615b!
	I0917 17:12:33.102267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49db4028-d397-4008-8751-9e9ffd852865", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-334000_e5e593b2-8602-4dda-b9cf-89d03c53615b became leader
	I0917 17:12:33.206756       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-334000_e5e593b2-8602-4dda-b9cf-89d03c53615b!
	
	
	==> storage-provisioner [d1b02a9ddd8c] <==
	I0917 17:13:17.896095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 17:13:17.908281       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 17:13:17.908313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 17:13:35.318207       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 17:13:35.318354       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49db4028-d397-4008-8751-9e9ffd852865", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-334000_0316f0a2-3443-4d0b-885c-37fe1aa71564 became leader
	I0917 17:13:35.318774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-334000_0316f0a2-3443-4d0b-885c-37fe1aa71564!
	I0917 17:13:35.419721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-334000_0316f0a2-3443-4d0b-885c-37fe1aa71564!
	I0917 17:13:47.519046       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0917 17:13:47.519183       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"aec16037-f630-48a1-940b-b51c99283752", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0917 17:13:47.519109       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a9de2acc-c0a8-49e7-b907-e0e6389ab3d0 300 0 2024-09-17 17:12:04 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-17 17:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-aec16037-f630-48a1-940b-b51c99283752 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  aec16037-f630-48a1-940b-b51c99283752 658 0 2024-09-17 17:13:47 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-17 17:13:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-17 17:13:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0917 17:13:47.519700       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-aec16037-f630-48a1-940b-b51c99283752" provisioned
	I0917 17:13:47.519712       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0917 17:13:47.519715       1 volume_store.go:212] Trying to save persistentvolume "pvc-aec16037-f630-48a1-940b-b51c99283752"
	I0917 17:13:47.524621       1 volume_store.go:219] persistentvolume "pvc-aec16037-f630-48a1-940b-b51c99283752" saved
	I0917 17:13:47.526311       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"aec16037-f630-48a1-940b-b51c99283752", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-aec16037-f630-48a1-940b-b51c99283752
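
	This second provisioner instance shows a complete, healthy hostpath cycle: acquire the leader lease, pick up default/myclaim, provision pvc-aec16037-f630-48a1-940b-b51c99283752 under /tmp/hostpath-provisioner, save the PV, and emit ProvisioningSucceeded. The binding can be confirmed from the same context with:

	kubectl --context functional-334000 get pvc myclaim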
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-334000 -n functional-334000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-334000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-695b96c756-l227m
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-334000 describe pod busybox-mount kubernetes-dashboard-695b96c756-l227m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-334000 describe pod busybox-mount kubernetes-dashboard-695b96c756-l227m: exit status 1 (38.863917ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-334000/192.168.105.4
	Start Time:       Tue, 17 Sep 2024 10:14:15 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://c877e9fc8e51fd9ab542a4c2c2faeb7611b28d80c233c12f9170910e568ec443
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 17 Sep 2024 10:14:17 -0700
	      Finished:     Tue, 17 Sep 2024 10:14:17 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zjsgw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zjsgw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  17s   default-scheduler  Successfully assigned default/busybox-mount to functional-334000
	  Normal  Pulling    16s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.451s (1.451s including waiting). Image size: 3547125 bytes.
	  Normal  Created    15s   kubelet            Created container mount-munger
	  Normal  Started    15s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-l227m" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-334000 describe pod busybox-mount kubernetes-dashboard-695b96c756-l227m: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (39.37s)

TestMultiControlPlane/serial/StopSecondaryNode (214.15s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 node stop m02 -v=7 --alsologtostderr
E0917 10:19:23.327105    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-468000 node stop m02 -v=7 --alsologtostderr: (12.194378292s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr
E0917 10:19:34.220359    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:20:04.289961    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:26.211865    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr: exit status 7 (2m55.992883167s)

-- stdout --
	ha-468000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-468000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-468000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0917 10:19:33.540009    3367 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:19:33.540218    3367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:19:33.540223    3367 out.go:358] Setting ErrFile to fd 2...
	I0917 10:19:33.540225    3367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:19:33.540398    3367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:19:33.540545    3367 out.go:352] Setting JSON to false
	I0917 10:19:33.540555    3367 mustload.go:65] Loading cluster: ha-468000
	I0917 10:19:33.540594    3367 notify.go:220] Checking for updates...
	I0917 10:19:33.540837    3367 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:19:33.540845    3367 status.go:255] checking status of ha-468000 ...
	I0917 10:19:33.541745    3367 status.go:330] ha-468000 host status = "Running" (err=<nil>)
	I0917 10:19:33.541756    3367 host.go:66] Checking if "ha-468000" exists ...
	I0917 10:19:33.541888    3367 host.go:66] Checking if "ha-468000" exists ...
	I0917 10:19:33.542034    3367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:19:33.542043    3367 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/id_rsa Username:docker}
	W0917 10:19:59.462487    3367 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0917 10:19:59.462619    3367 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 10:19:59.462639    3367 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 10:19:59.462648    3367 status.go:257] ha-468000 status: &{Name:ha-468000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 10:19:59.462669    3367 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 10:19:59.462687    3367 status.go:255] checking status of ha-468000-m02 ...
	I0917 10:19:59.463207    3367 status.go:330] ha-468000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:19:59.463220    3367 status.go:343] host is not running, skipping remaining checks
	I0917 10:19:59.463226    3367 status.go:257] ha-468000-m02 status: &{Name:ha-468000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:19:59.463239    3367 status.go:255] checking status of ha-468000-m03 ...
	I0917 10:19:59.464468    3367 status.go:330] ha-468000-m03 host status = "Running" (err=<nil>)
	I0917 10:19:59.464482    3367 host.go:66] Checking if "ha-468000-m03" exists ...
	I0917 10:19:59.464771    3367 host.go:66] Checking if "ha-468000-m03" exists ...
	I0917 10:19:59.465078    3367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:19:59.465093    3367 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m03/id_rsa Username:docker}
	W0917 10:21:14.466493    3367 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0917 10:21:14.466696    3367 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0917 10:21:14.466781    3367 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 10:21:14.466801    3367 status.go:257] ha-468000-m03 status: &{Name:ha-468000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 10:21:14.466838    3367 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 10:21:14.466864    3367 status.go:255] checking status of ha-468000-m04 ...
	I0917 10:21:14.470088    3367 status.go:330] ha-468000-m04 host status = "Running" (err=<nil>)
	I0917 10:21:14.470112    3367 host.go:66] Checking if "ha-468000-m04" exists ...
	I0917 10:21:14.470648    3367 host.go:66] Checking if "ha-468000-m04" exists ...
	I0917 10:21:14.471288    3367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:21:14.471324    3367 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m04/id_rsa Username:docker}
	W0917 10:22:29.471945    3367 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0917 10:22:29.471992    3367 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0917 10:22:29.472000    3367 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0917 10:22:29.472004    3367 status.go:257] ha-468000-m04 status: &{Name:ha-468000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0917 10:22:29.472013    3367 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
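
Every error in this status run is one symptom: SSH to the node VMs (192.168.105.5, .7 and .8, port 22) times out, so minikube cannot even read /var usage and reports the hosts as Error with Nonexistent components. That points at the qemu2/socket_vmnet network on the CI host rather than at Kubernetes; the interleaved cert_rotation errors are stale certificate watchers for the already-deleted functional-334000 and addons-439000 profiles and are unrelated. A quick reachability probe from the macOS host (BSD netcat, 5-second timeout):

	nc -z -w 5 192.168.105.5 22
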
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr": ha-468000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-468000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-468000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr": ha-468000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-468000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-468000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr": ha-468000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-468000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-468000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 3 (25.965320334s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 10:22:55.437083    3394 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 10:22:55.437095    3394 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.15s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0917 10:23:42.328091    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:24:06.483220    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:24:10.052164    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.535697791s)
ha_test.go:413: expected profile "ha-468000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-468000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-468000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-468000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
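
The assertion fails because the profile status is derived from host reachability: with every node unreachable over SSH, the profile collapses to "Stopped" instead of the expected "Degraded" (which the test reserves for a cluster where only some nodes are down). The embedded config JSON is easier to inspect filtered; assuming jq is installed on the host:

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name) \(.Status)"'
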
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 3 (25.965326833s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 10:24:39.932307    3418 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 10:24:39.932358    3418 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.50s)

TestMultiControlPlane/serial/RestartSecondaryNode (208.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-468000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.138116333s)
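
The stdout/stderr dumps below show why the node never comes back: each qemu restart is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot attach to /var/run/socket_vmnet ("Connection refused"), so the VM gets no network. A socket_vmnet daemon that has died or left a stale socket on the CI host would also explain the SSH timeouts in the earlier status checks. First host-side checks (standard macOS tools):

	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet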

-- stdout --
	* Starting "ha-468000-m02" control-plane node in "ha-468000" cluster
	* Restarting existing qemu2 VM for "ha-468000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-468000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:24:40.006154    3424 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:24:40.006469    3424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:24:40.006474    3424 out.go:358] Setting ErrFile to fd 2...
	I0917 10:24:40.006478    3424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:24:40.006638    3424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:24:40.006954    3424 mustload.go:65] Loading cluster: ha-468000
	I0917 10:24:40.007288    3424 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0917 10:24:40.007595    3424 host.go:58] "ha-468000-m02" host status: Stopped
	I0917 10:24:40.011994    3424 out.go:177] * Starting "ha-468000-m02" control-plane node in "ha-468000" cluster
	I0917 10:24:40.015056    3424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:24:40.015080    3424 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:24:40.015095    3424 cache.go:56] Caching tarball of preloaded images
	I0917 10:24:40.015203    3424 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:24:40.015210    3424 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:24:40.015289    3424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/ha-468000/config.json ...
	I0917 10:24:40.015697    3424 start.go:360] acquireMachinesLock for ha-468000-m02: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:24:40.015780    3424 start.go:364] duration metric: took 40.042µs to acquireMachinesLock for "ha-468000-m02"
	I0917 10:24:40.015790    3424 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:24:40.015797    3424 fix.go:54] fixHost starting: m02
	I0917 10:24:40.015917    3424 fix.go:112] recreateIfNeeded on ha-468000-m02: state=Stopped err=<nil>
	W0917 10:24:40.015923    3424 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:24:40.018969    3424 out.go:177] * Restarting existing qemu2 VM for "ha-468000-m02" ...
	I0917 10:24:40.022963    3424 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:24:40.023029    3424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:18:40:91:e7:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/disk.qcow2
	I0917 10:24:40.025833    3424 main.go:141] libmachine: STDOUT: 
	I0917 10:24:40.025854    3424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:24:40.025889    3424 fix.go:56] duration metric: took 10.091833ms for fixHost
	I0917 10:24:40.025897    3424 start.go:83] releasing machines lock for "ha-468000-m02", held for 10.107583ms
	W0917 10:24:40.025906    3424 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:24:40.025945    3424 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:24:40.025951    3424 start.go:729] Will try again in 5 seconds ...
	I0917 10:24:45.026077    3424 start.go:360] acquireMachinesLock for ha-468000-m02: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:24:45.026492    3424 start.go:364] duration metric: took 330.125µs to acquireMachinesLock for "ha-468000-m02"
	I0917 10:24:45.026673    3424 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:24:45.026695    3424 fix.go:54] fixHost starting: m02
	I0917 10:24:45.027494    3424 fix.go:112] recreateIfNeeded on ha-468000-m02: state=Stopped err=<nil>
	W0917 10:24:45.027523    3424 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:24:45.032846    3424 out.go:177] * Restarting existing qemu2 VM for "ha-468000-m02" ...
	I0917 10:24:45.036793    3424 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:24:45.036987    3424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:18:40:91:e7:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/disk.qcow2
	I0917 10:24:45.046639    3424 main.go:141] libmachine: STDOUT: 
	I0917 10:24:45.046735    3424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:24:45.046827    3424 fix.go:56] duration metric: took 20.134125ms for fixHost
	I0917 10:24:45.046843    3424 start.go:83] releasing machines lock for "ha-468000-m02", held for 20.314542ms
	W0917 10:24:45.046995    3424 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:24:45.051805    3424 out.go:201] 
	W0917 10:24:45.055853    3424 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:24:45.055882    3424 out.go:270] * 
	* 
	W0917 10:24:45.064113    3424 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:24:45.068818    3424 out.go:201] 

                                                
                                                
** /stderr **
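Note on the failure mode above: minikube does not launch qemu-system-aarch64 directly; it wraps it in socket_vmnet_client, which first connects to the unix socket at /var/run/socket_vmnet and then execs qemu with that connection handed over as fd 3 (the `-netdev socket,id=net0,fd=3` argument). Because the initial connect is refused, qemu never starts at all, which is why STDOUT is empty and fixHost fails within ~10ms. This points at the socket_vmnet helper daemon on the CI host (not running, or listening on a stale socket path) rather than at the node under test.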
ha_test.go:422: I0917 10:24:40.006154    3424 out.go:345] Setting OutFile to fd 1 ...
I0917 10:24:40.006469    3424 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:24:40.006474    3424 out.go:358] Setting ErrFile to fd 2...
I0917 10:24:40.006478    3424 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:24:40.006638    3424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:24:40.006954    3424 mustload.go:65] Loading cluster: ha-468000
I0917 10:24:40.007288    3424 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0917 10:24:40.007595    3424 host.go:58] "ha-468000-m02" host status: Stopped
I0917 10:24:40.011994    3424 out.go:177] * Starting "ha-468000-m02" control-plane node in "ha-468000" cluster
I0917 10:24:40.015056    3424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0917 10:24:40.015080    3424 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0917 10:24:40.015095    3424 cache.go:56] Caching tarball of preloaded images
I0917 10:24:40.015203    3424 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0917 10:24:40.015210    3424 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0917 10:24:40.015289    3424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/ha-468000/config.json ...
I0917 10:24:40.015697    3424 start.go:360] acquireMachinesLock for ha-468000-m02: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0917 10:24:40.015780    3424 start.go:364] duration metric: took 40.042µs to acquireMachinesLock for "ha-468000-m02"
I0917 10:24:40.015790    3424 start.go:96] Skipping create...Using existing machine configuration
I0917 10:24:40.015797    3424 fix.go:54] fixHost starting: m02
I0917 10:24:40.015917    3424 fix.go:112] recreateIfNeeded on ha-468000-m02: state=Stopped err=<nil>
W0917 10:24:40.015923    3424 fix.go:138] unexpected machine state, will restart: <nil>
I0917 10:24:40.018969    3424 out.go:177] * Restarting existing qemu2 VM for "ha-468000-m02" ...
I0917 10:24:40.022963    3424 qemu.go:418] Using hvf for hardware acceleration
I0917 10:24:40.023029    3424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:18:40:91:e7:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/disk.qcow2
I0917 10:24:40.025833    3424 main.go:141] libmachine: STDOUT: 
I0917 10:24:40.025854    3424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0917 10:24:40.025889    3424 fix.go:56] duration metric: took 10.091833ms for fixHost
I0917 10:24:40.025897    3424 start.go:83] releasing machines lock for "ha-468000-m02", held for 10.107583ms
W0917 10:24:40.025906    3424 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0917 10:24:40.025945    3424 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0917 10:24:40.025951    3424 start.go:729] Will try again in 5 seconds ...
I0917 10:24:45.026077    3424 start.go:360] acquireMachinesLock for ha-468000-m02: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0917 10:24:45.026492    3424 start.go:364] duration metric: took 330.125µs to acquireMachinesLock for "ha-468000-m02"
I0917 10:24:45.026673    3424 start.go:96] Skipping create...Using existing machine configuration
I0917 10:24:45.026695    3424 fix.go:54] fixHost starting: m02
I0917 10:24:45.027494    3424 fix.go:112] recreateIfNeeded on ha-468000-m02: state=Stopped err=<nil>
W0917 10:24:45.027523    3424 fix.go:138] unexpected machine state, will restart: <nil>
I0917 10:24:45.032846    3424 out.go:177] * Restarting existing qemu2 VM for "ha-468000-m02" ...
I0917 10:24:45.036793    3424 qemu.go:418] Using hvf for hardware acceleration
I0917 10:24:45.036987    3424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:18:40:91:e7:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m02/disk.qcow2
I0917 10:24:45.046639    3424 main.go:141] libmachine: STDOUT: 
I0917 10:24:45.046735    3424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0917 10:24:45.046827    3424 fix.go:56] duration metric: took 20.134125ms for fixHost
I0917 10:24:45.046843    3424 start.go:83] releasing machines lock for "ha-468000-m02", held for 20.314542ms
W0917 10:24:45.046995    3424 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0917 10:24:45.051805    3424 out.go:201] 
W0917 10:24:45.055853    3424 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0917 10:24:45.055882    3424 out.go:270] * 
* 
W0917 10:24:45.064113    3424 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0917 10:24:45.068818    3424 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-468000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr: exit status 7 (2m57.34285475s)

                                                
                                                
-- stdout --
	ha-468000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-468000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-468000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:24:45.136150    3429 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:24:45.136363    3429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:24:45.136369    3429 out.go:358] Setting ErrFile to fd 2...
	I0917 10:24:45.136372    3429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:24:45.136537    3429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:24:45.136683    3429 out.go:352] Setting JSON to false
	I0917 10:24:45.136697    3429 mustload.go:65] Loading cluster: ha-468000
	I0917 10:24:45.136748    3429 notify.go:220] Checking for updates...
	I0917 10:24:45.136999    3429 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:45.137013    3429 status.go:255] checking status of ha-468000 ...
	I0917 10:24:45.137941    3429 status.go:330] ha-468000 host status = "Running" (err=<nil>)
	I0917 10:24:45.137961    3429 host.go:66] Checking if "ha-468000" exists ...
	I0917 10:24:45.138100    3429 host.go:66] Checking if "ha-468000" exists ...
	I0917 10:24:45.138236    3429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:24:45.138245    3429 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/id_rsa Username:docker}
	W0917 10:24:45.138459    3429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0917 10:24:45.138476    3429 retry.go:31] will retry after 362.989728ms: dial tcp 192.168.105.5:22: connect: host is down
	W0917 10:24:45.504076    3429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0917 10:24:45.504154    3429 retry.go:31] will retry after 498.440931ms: dial tcp 192.168.105.5:22: connect: host is down
	W0917 10:24:46.004102    3429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0917 10:24:46.004192    3429 retry.go:31] will retry after 480.388833ms: dial tcp 192.168.105.5:22: connect: host is down
	W0917 10:25:12.411316    3429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0917 10:25:12.411382    3429 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 10:25:12.411392    3429 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 10:25:12.411398    3429 status.go:257] ha-468000 status: &{Name:ha-468000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 10:25:12.411408    3429 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 10:25:12.411411    3429 status.go:255] checking status of ha-468000-m02 ...
	I0917 10:25:12.411624    3429 status.go:330] ha-468000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:25:12.411629    3429 status.go:343] host is not running, skipping remaining checks
	I0917 10:25:12.411631    3429 status.go:257] ha-468000-m02 status: &{Name:ha-468000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:25:12.411635    3429 status.go:255] checking status of ha-468000-m03 ...
	I0917 10:25:12.412204    3429 status.go:330] ha-468000-m03 host status = "Running" (err=<nil>)
	I0917 10:25:12.412209    3429 host.go:66] Checking if "ha-468000-m03" exists ...
	I0917 10:25:12.412325    3429 host.go:66] Checking if "ha-468000-m03" exists ...
	I0917 10:25:12.412454    3429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:25:12.412460    3429 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m03/id_rsa Username:docker}
	W0917 10:26:27.412627    3429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0917 10:26:27.412759    3429 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0917 10:26:27.412777    3429 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 10:26:27.412785    3429 status.go:257] ha-468000-m03 status: &{Name:ha-468000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 10:26:27.412804    3429 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 10:26:27.412812    3429 status.go:255] checking status of ha-468000-m04 ...
	I0917 10:26:27.414353    3429 status.go:330] ha-468000-m04 host status = "Running" (err=<nil>)
	I0917 10:26:27.414368    3429 host.go:66] Checking if "ha-468000-m04" exists ...
	I0917 10:26:27.414629    3429 host.go:66] Checking if "ha-468000-m04" exists ...
	I0917 10:26:27.414892    3429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:26:27.414905    3429 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000-m04/id_rsa Username:docker}
	W0917 10:27:42.415524    3429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0917 10:27:42.415569    3429 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0917 10:27:42.415578    3429 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0917 10:27:42.415582    3429 status.go:257] ha-468000-m04 status: &{Name:ha-468000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0917 10:27:42.415592    3429 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr" : exit status 7
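Note that in the status output above only ha-468000-m02 is reported as "Stopped"; ha-468000, m03 and m04 show host "Error" instead. Their qemu processes apparently survived from earlier in the run, so the local state check reports "Running", but with socket_vmnet down the guests have no networking, and every SSH dial to 192.168.105.x:22 eventually times out (roughly 75 seconds each for m03 and m04). Those timeouts account for nearly all of the 2m57s this status call took.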
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 3 (25.963064667s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:28:08.378270    3456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 10:28:08.378280    3456 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.45s)
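Since every restart attempt in this run fails on the same refused connection, a reasonable first triage step is to check the socket_vmnet daemon on the CI host itself. A minimal sketch follows; the paths are taken from the log above, the launchctl label is an assumption (it depends on how socket_vmnet was installed), and the `true` probe simply reuses the client's connect-then-exec behavior:

	ls -l /var/run/socket_vmnet                    # does the socket exist, with usable permissions?
	sudo launchctl list | grep -i socket_vmnet     # assumes a launchd-managed install; the label may differ
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# the last command should fail with the same "Connection refused" for as long as the daemon is down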

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-468000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-468000 -v=7 --alsologtostderr
E0917 10:30:29.570105    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-468000 -v=7 --alsologtostderr: (3m49.007783167s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-468000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-468000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.227775875s)

                                                
                                                
-- stdout --
	* [ha-468000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-468000" primary control-plane node in "ha-468000" cluster
	* Restarting existing qemu2 VM for "ha-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:33:16.743322    3871 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:33:16.743525    3871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:33:16.743529    3871 out.go:358] Setting ErrFile to fd 2...
	I0917 10:33:16.743533    3871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:33:16.743698    3871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:33:16.744907    3871 out.go:352] Setting JSON to false
	I0917 10:33:16.764306    3871 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3759,"bootTime":1726590637,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:33:16.764379    3871 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:33:16.768626    3871 out.go:177] * [ha-468000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:33:16.775554    3871 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:33:16.775606    3871 notify.go:220] Checking for updates...
	I0917 10:33:16.784483    3871 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:33:16.787557    3871 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:33:16.788965    3871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:33:16.792535    3871 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:33:16.795531    3871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:33:16.798912    3871 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:33:16.798957    3871 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:33:16.803432    3871 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:33:16.810523    3871 start.go:297] selected driver: qemu2
	I0917 10:33:16.810531    3871 start.go:901] validating driver "qemu2" against &{Name:ha-468000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-468000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:33:16.810629    3871 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:33:16.813313    3871 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:33:16.813336    3871 cni.go:84] Creating CNI manager for ""
	I0917 10:33:16.813366    3871 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:33:16.813419    3871 start.go:340] cluster config:
	{Name:ha-468000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-468000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:33:16.817479    3871 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:33:16.826559    3871 out.go:177] * Starting "ha-468000" primary control-plane node in "ha-468000" cluster
	I0917 10:33:16.830532    3871 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:33:16.830549    3871 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:33:16.830558    3871 cache.go:56] Caching tarball of preloaded images
	I0917 10:33:16.830623    3871 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:33:16.830628    3871 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:33:16.830711    3871 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/ha-468000/config.json ...
	I0917 10:33:16.831172    3871 start.go:360] acquireMachinesLock for ha-468000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:33:16.831210    3871 start.go:364] duration metric: took 31.584µs to acquireMachinesLock for "ha-468000"
	I0917 10:33:16.831219    3871 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:33:16.831224    3871 fix.go:54] fixHost starting: 
	I0917 10:33:16.831349    3871 fix.go:112] recreateIfNeeded on ha-468000: state=Stopped err=<nil>
	W0917 10:33:16.831358    3871 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:33:16.835632    3871 out.go:177] * Restarting existing qemu2 VM for "ha-468000" ...
	I0917 10:33:16.843440    3871 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:33:16.843478    3871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bc:00:ed:6e:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/disk.qcow2
	I0917 10:33:16.845515    3871 main.go:141] libmachine: STDOUT: 
	I0917 10:33:16.845535    3871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:33:16.845569    3871 fix.go:56] duration metric: took 14.344625ms for fixHost
	I0917 10:33:16.845575    3871 start.go:83] releasing machines lock for "ha-468000", held for 14.360459ms
	W0917 10:33:16.845581    3871 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:33:16.845622    3871 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:33:16.845626    3871 start.go:729] Will try again in 5 seconds ...
	I0917 10:33:21.847692    3871 start.go:360] acquireMachinesLock for ha-468000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:33:21.848052    3871 start.go:364] duration metric: took 283.667µs to acquireMachinesLock for "ha-468000"
	I0917 10:33:21.848198    3871 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:33:21.848215    3871 fix.go:54] fixHost starting: 
	I0917 10:33:21.848909    3871 fix.go:112] recreateIfNeeded on ha-468000: state=Stopped err=<nil>
	W0917 10:33:21.848936    3871 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:33:21.853412    3871 out.go:177] * Restarting existing qemu2 VM for "ha-468000" ...
	I0917 10:33:21.860323    3871 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:33:21.860556    3871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bc:00:ed:6e:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/disk.qcow2
	I0917 10:33:21.869702    3871 main.go:141] libmachine: STDOUT: 
	I0917 10:33:21.869784    3871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:33:21.869874    3871 fix.go:56] duration metric: took 21.657333ms for fixHost
	I0917 10:33:21.869903    3871 start.go:83] releasing machines lock for "ha-468000", held for 21.828416ms
	W0917 10:33:21.870153    3871 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:33:21.877244    3871 out.go:201] 
	W0917 10:33:21.881408    3871 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:33:21.881432    3871 out.go:270] * 
	* 
	W0917 10:33:21.884104    3871 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:33:21.895321    3871 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-468000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-468000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (33.750667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)
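Once the primary node cannot be restarted (the same socket_vmnet refusal as before), the profile is left with all four hosts stopped. The subtests that follow run serially against that fully stopped profile, so their failures below (node delete refusing to proceed, status reporting "Stopped" everywhere, the Degraded check) are cascading consequences of this single start failure rather than independent regressions.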

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-468000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.681833ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-468000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-468000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:33:22.037352    3884 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:33:22.037603    3884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:33:22.037606    3884 out.go:358] Setting ErrFile to fd 2...
	I0917 10:33:22.037609    3884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:33:22.037754    3884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:33:22.037995    3884 mustload.go:65] Loading cluster: ha-468000
	I0917 10:33:22.038242    3884 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0917 10:33:22.038547    3884 out.go:270] ! The control-plane node ha-468000 host is not running (will try others): state=Stopped
	! The control-plane node ha-468000 host is not running (will try others): state=Stopped
	W0917 10:33:22.038648    3884 out.go:270] ! The control-plane node ha-468000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-468000-m02 host is not running (will try others): state=Stopped
	I0917 10:33:22.042432    3884 out.go:177] * The control-plane node ha-468000-m03 host is not running: state=Stopped
	I0917 10:33:22.045369    3884 out.go:177]   To start a cluster, run: "minikube start -p ha-468000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-468000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr: exit status 7 (30.546125ms)

                                                
                                                
-- stdout --
	ha-468000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:33:22.077812    3886 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:33:22.077973    3886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:33:22.077977    3886 out.go:358] Setting ErrFile to fd 2...
	I0917 10:33:22.077980    3886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:33:22.078103    3886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:33:22.078219    3886 out.go:352] Setting JSON to false
	I0917 10:33:22.078229    3886 mustload.go:65] Loading cluster: ha-468000
	I0917 10:33:22.078291    3886 notify.go:220] Checking for updates...
	I0917 10:33:22.078463    3886 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:33:22.078470    3886 status.go:255] checking status of ha-468000 ...
	I0917 10:33:22.078706    3886 status.go:330] ha-468000 host status = "Stopped" (err=<nil>)
	I0917 10:33:22.078710    3886 status.go:343] host is not running, skipping remaining checks
	I0917 10:33:22.078712    3886 status.go:257] ha-468000 status: &{Name:ha-468000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:33:22.078721    3886 status.go:255] checking status of ha-468000-m02 ...
	I0917 10:33:22.078809    3886 status.go:330] ha-468000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:33:22.078812    3886 status.go:343] host is not running, skipping remaining checks
	I0917 10:33:22.078814    3886 status.go:257] ha-468000-m02 status: &{Name:ha-468000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:33:22.078818    3886 status.go:255] checking status of ha-468000-m03 ...
	I0917 10:33:22.078905    3886 status.go:330] ha-468000-m03 host status = "Stopped" (err=<nil>)
	I0917 10:33:22.078907    3886 status.go:343] host is not running, skipping remaining checks
	I0917 10:33:22.078909    3886 status.go:257] ha-468000-m03 status: &{Name:ha-468000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:33:22.078912    3886 status.go:255] checking status of ha-468000-m04 ...
	I0917 10:33:22.079015    3886 status.go:330] ha-468000-m04 host status = "Stopped" (err=<nil>)
	I0917 10:33:22.079018    3886 status.go:343] host is not running, skipping remaining checks
	I0917 10:33:22.079019    3886 status.go:257] ha-468000-m04 status: &{Name:ha-468000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (29.916792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-468000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-468000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-468000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-468000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (30.596375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 stop -v=7 --alsologtostderr
E0917 10:33:42.293587    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:34:06.433306    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:35:05.360154    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-468000 stop -v=7 --alsologtostderr: (3m21.972955958s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr: exit status 7 (66.190167ms)

-- stdout --
	ha-468000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 10:36:44.176684    3919 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:36:44.176932    3919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:36:44.176937    3919 out.go:358] Setting ErrFile to fd 2...
	I0917 10:36:44.176940    3919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:36:44.177119    3919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:36:44.177274    3919 out.go:352] Setting JSON to false
	I0917 10:36:44.177285    3919 mustload.go:65] Loading cluster: ha-468000
	I0917 10:36:44.177325    3919 notify.go:220] Checking for updates...
	I0917 10:36:44.177583    3919 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:36:44.177593    3919 status.go:255] checking status of ha-468000 ...
	I0917 10:36:44.177894    3919 status.go:330] ha-468000 host status = "Stopped" (err=<nil>)
	I0917 10:36:44.177898    3919 status.go:343] host is not running, skipping remaining checks
	I0917 10:36:44.177901    3919 status.go:257] ha-468000 status: &{Name:ha-468000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:36:44.177913    3919 status.go:255] checking status of ha-468000-m02 ...
	I0917 10:36:44.178055    3919 status.go:330] ha-468000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:36:44.178061    3919 status.go:343] host is not running, skipping remaining checks
	I0917 10:36:44.178063    3919 status.go:257] ha-468000-m02 status: &{Name:ha-468000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:36:44.178069    3919 status.go:255] checking status of ha-468000-m03 ...
	I0917 10:36:44.178200    3919 status.go:330] ha-468000-m03 host status = "Stopped" (err=<nil>)
	I0917 10:36:44.178204    3919 status.go:343] host is not running, skipping remaining checks
	I0917 10:36:44.178206    3919 status.go:257] ha-468000-m03 status: &{Name:ha-468000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:36:44.178214    3919 status.go:255] checking status of ha-468000-m04 ...
	I0917 10:36:44.178345    3919 status.go:330] ha-468000-m04 host status = "Stopped" (err=<nil>)
	I0917 10:36:44.178349    3919 status.go:343] host is not running, skipping remaining checks
	I0917 10:36:44.178351    3919 status.go:257] ha-468000-m04 status: &{Name:ha-468000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr": ha-468000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr": ha-468000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr": ha-468000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-468000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (32.322458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-468000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-468000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.193199875s)

-- stdout --
	* [ha-468000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-468000" primary control-plane node in "ha-468000" cluster
	* Restarting existing qemu2 VM for "ha-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-468000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:36:44.240902    3923 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:36:44.241034    3923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:36:44.241037    3923 out.go:358] Setting ErrFile to fd 2...
	I0917 10:36:44.241040    3923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:36:44.241167    3923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:36:44.242190    3923 out.go:352] Setting JSON to false
	I0917 10:36:44.258159    3923 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3967,"bootTime":1726590637,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:36:44.258235    3923 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:36:44.263338    3923 out.go:177] * [ha-468000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:36:44.271328    3923 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:36:44.271380    3923 notify.go:220] Checking for updates...
	I0917 10:36:44.279255    3923 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:36:44.283161    3923 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:36:44.286261    3923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:36:44.289294    3923 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:36:44.292248    3923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:36:44.295527    3923 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:36:44.295787    3923 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:36:44.300233    3923 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:36:44.307253    3923 start.go:297] selected driver: qemu2
	I0917 10:36:44.307261    3923 start.go:901] validating driver "qemu2" against &{Name:ha-468000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-468000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:36:44.307349    3923 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:36:44.309786    3923 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:36:44.309813    3923 cni.go:84] Creating CNI manager for ""
	I0917 10:36:44.309840    3923 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:36:44.309907    3923 start.go:340] cluster config:
	{Name:ha-468000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-468000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:36:44.313669    3923 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:36:44.322221    3923 out.go:177] * Starting "ha-468000" primary control-plane node in "ha-468000" cluster
	I0917 10:36:44.326307    3923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:36:44.326324    3923 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:36:44.326335    3923 cache.go:56] Caching tarball of preloaded images
	I0917 10:36:44.326409    3923 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:36:44.326415    3923 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:36:44.326489    3923 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/ha-468000/config.json ...
	I0917 10:36:44.326949    3923 start.go:360] acquireMachinesLock for ha-468000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:36:44.326986    3923 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "ha-468000"
	I0917 10:36:44.326995    3923 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:36:44.327002    3923 fix.go:54] fixHost starting: 
	I0917 10:36:44.327123    3923 fix.go:112] recreateIfNeeded on ha-468000: state=Stopped err=<nil>
	W0917 10:36:44.327133    3923 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:36:44.331252    3923 out.go:177] * Restarting existing qemu2 VM for "ha-468000" ...
	I0917 10:36:44.339079    3923 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:36:44.339117    3923 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bc:00:ed:6e:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/disk.qcow2
	I0917 10:36:44.341191    3923 main.go:141] libmachine: STDOUT: 
	I0917 10:36:44.341211    3923 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:36:44.341243    3923 fix.go:56] duration metric: took 14.24175ms for fixHost
	I0917 10:36:44.341248    3923 start.go:83] releasing machines lock for "ha-468000", held for 14.25825ms
	W0917 10:36:44.341254    3923 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:36:44.341291    3923 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:36:44.341296    3923 start.go:729] Will try again in 5 seconds ...
	I0917 10:36:49.343475    3923 start.go:360] acquireMachinesLock for ha-468000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:36:49.344038    3923 start.go:364] duration metric: took 386.042µs to acquireMachinesLock for "ha-468000"
	I0917 10:36:49.344175    3923 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:36:49.344196    3923 fix.go:54] fixHost starting: 
	I0917 10:36:49.344974    3923 fix.go:112] recreateIfNeeded on ha-468000: state=Stopped err=<nil>
	W0917 10:36:49.345002    3923 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:36:49.350493    3923 out.go:177] * Restarting existing qemu2 VM for "ha-468000" ...
	I0917 10:36:49.359418    3923 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:36:49.359636    3923 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bc:00:ed:6e:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/ha-468000/disk.qcow2
	I0917 10:36:49.369335    3923 main.go:141] libmachine: STDOUT: 
	I0917 10:36:49.369414    3923 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:36:49.369510    3923 fix.go:56] duration metric: took 25.31575ms for fixHost
	I0917 10:36:49.369535    3923 start.go:83] releasing machines lock for "ha-468000", held for 25.477834ms
	W0917 10:36:49.369731    3923 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-468000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:36:49.377390    3923 out.go:201] 
	W0917 10:36:49.381510    3923 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:36:49.381546    3923 out.go:270] * 
	* 
	W0917 10:36:49.384300    3923 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:36:49.394457    3923 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-468000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (70.054709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-468000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-468000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-468000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-468000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (30.384625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-468000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-468000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.104ms)

-- stdout --
	* The control-plane node ha-468000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-468000"

-- /stdout --
** stderr ** 
	I0917 10:36:49.586492    3949 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:36:49.586653    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:36:49.586656    3949 out.go:358] Setting ErrFile to fd 2...
	I0917 10:36:49.586658    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:36:49.586790    3949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:36:49.587016    3949 mustload.go:65] Loading cluster: ha-468000
	I0917 10:36:49.587247    3949 config.go:182] Loaded profile config "ha-468000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0917 10:36:49.587549    3949 out.go:270] ! The control-plane node ha-468000 host is not running (will try others): state=Stopped
	! The control-plane node ha-468000 host is not running (will try others): state=Stopped
	W0917 10:36:49.587648    3949 out.go:270] ! The control-plane node ha-468000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-468000-m02 host is not running (will try others): state=Stopped
	I0917 10:36:49.592022    3949 out.go:177] * The control-plane node ha-468000-m03 host is not running: state=Stopped
	I0917 10:36:49.596001    3949 out.go:177]   To start a cluster, run: "minikube start -p ha-468000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-468000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-468000 -n ha-468000: exit status 7 (29.505ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.3s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-645000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-645000 --driver=qemu2 : exit status 80 (10.227430125s)

-- stdout --
	* [image-645000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-645000" primary control-plane node in "image-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-645000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-645000 -n image-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-645000 -n image-645000: exit status 7 (68.316375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.30s)

TestJSONOutput/start/Command (9.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-843000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-843000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.805885875s)

-- stdout --
	{"specversion":"1.0","id":"4122cd69-7b13-4ecf-b45a-df270b1e92b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-843000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"86b52f3a-7379-4a86-91dd-8046ce3339e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"5baefcc3-3cfe-4338-b64f-7b5779522b91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig"}}
	{"specversion":"1.0","id":"f44f2dff-24be-4bba-bf8f-11c83c4c3834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ce0596f5-eec0-42c2-aece-19359e7aae44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9065cb3b-4fac-4506-a9fe-2e803c8aeb8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube"}}
	{"specversion":"1.0","id":"3784f3fb-a425-4577-8e64-1b3d94aabca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e29a725d-5e06-4b84-af86-7ee943800a97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3750510-c82c-4e0d-9a39-1e909cf7bb55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0e3484f6-3e4d-44d3-b6c2-da7454b98b09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-843000\" primary control-plane node in \"json-output-843000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed49c264-59fe-426b-b29f-d6bc72d10061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8be801c2-100c-4420-aab7-3ed3452a6ec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-843000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f4de62b-5524-4d6b-82d7-a91a01cdb6e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fa080726-a322-4b66-af9e-78612f6fcbbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"52cc5040-89eb-4a81-9c5e-d9dfb7e87531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-843000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"576d3e3c-3ccb-4735-a18f-a432c3da2f29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"1946176a-b925-42dd-a4a9-c774656424d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-843000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.81s)

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-843000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-843000 --output=json --user=testUser: exit status 83 (79.868333ms)

-- stdout --
	{"specversion":"1.0","id":"98956b85-16d9-4129-b951-81f8fe270ace","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-843000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"4ccc2464-2bf2-4e6b-a251-abf994d57951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-843000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-843000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-843000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-843000 --output=json --user=testUser: exit status 83 (48.577916ms)

-- stdout --
	* The control-plane node json-output-843000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-843000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-843000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-843000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-696000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-696000 --driver=qemu2 : exit status 80 (9.913061083s)

-- stdout --
	* [first-696000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-696000" primary control-plane node in "first-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-696000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-17 10:37:24.186054 -0700 PDT m=+2523.201584584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-697000 -n second-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-697000 -n second-697000: exit status 85 (83.810958ms)

-- stdout --
	* Profile "second-697000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-697000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-697000" host is not running, skipping log retrieval (state="* Profile \"second-697000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-697000\"")
helpers_test.go:175: Cleaning up "second-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-697000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-17 10:37:24.373041 -0700 PDT m=+2523.388577001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-696000 -n first-696000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-696000 -n first-696000: exit status 7 (30.43275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-696000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-696000
--- FAIL: TestMinikubeProfile (10.21s)
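
Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM is deleted, recreated once, and the test then gives up with GUEST_PROVISION. The suggested "minikube delete -p" will not help while the daemon itself is down. A minimal triage sketch for the CI host, assuming the /opt/socket_vmnet install paths shown in these logs (the gateway address below is an example value, not taken from this run):

	# is the socket there, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if not, start the daemon (flags per the socket_vmnet README;
	# gateway address is an assumed example)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet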

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-057000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-057000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.854441708s)

                                                
                                                
-- stdout --
	* [mount-start-1-057000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-057000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-057000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-057000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-057000 -n mount-start-1-057000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-057000 -n mount-start-1-057000: exit status 7 (69.659084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-057000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.92s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-404000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-404000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.885946834s)

                                                
                                                
-- stdout --
	* [multinode-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-404000" primary control-plane node in "multinode-404000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:37:34.621196    4099 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:37:34.621334    4099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:37:34.621338    4099 out.go:358] Setting ErrFile to fd 2...
	I0917 10:37:34.621340    4099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:37:34.621486    4099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:37:34.622528    4099 out.go:352] Setting JSON to false
	I0917 10:37:34.638738    4099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4017,"bootTime":1726590637,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:37:34.638810    4099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:37:34.644653    4099 out.go:177] * [multinode-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:37:34.652544    4099 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:37:34.652612    4099 notify.go:220] Checking for updates...
	I0917 10:37:34.660606    4099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:37:34.663625    4099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:37:34.666601    4099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:37:34.669607    4099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:37:34.672509    4099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:37:34.675809    4099 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:37:34.680628    4099 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:37:34.687580    4099 start.go:297] selected driver: qemu2
	I0917 10:37:34.687586    4099 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:37:34.687592    4099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:37:34.689844    4099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:37:34.692637    4099 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:37:34.694254    4099 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:37:34.694269    4099 cni.go:84] Creating CNI manager for ""
	I0917 10:37:34.694287    4099 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 10:37:34.694290    4099 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 10:37:34.694317    4099 start.go:340] cluster config:
	{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:37:34.698071    4099 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:37:34.705643    4099 out.go:177] * Starting "multinode-404000" primary control-plane node in "multinode-404000" cluster
	I0917 10:37:34.709612    4099 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:37:34.709627    4099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:37:34.709634    4099 cache.go:56] Caching tarball of preloaded images
	I0917 10:37:34.709692    4099 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:37:34.709698    4099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:37:34.709903    4099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/multinode-404000/config.json ...
	I0917 10:37:34.709914    4099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/multinode-404000/config.json: {Name:mk54e82fd64e17864b726e00826bd8633eb622e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:37:34.710152    4099 start.go:360] acquireMachinesLock for multinode-404000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:37:34.710187    4099 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "multinode-404000"
	I0917 10:37:34.710199    4099 start.go:93] Provisioning new machine with config: &{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:37:34.710224    4099 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:37:34.717593    4099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:37:34.735134    4099 start.go:159] libmachine.API.Create for "multinode-404000" (driver="qemu2")
	I0917 10:37:34.735165    4099 client.go:168] LocalClient.Create starting
	I0917 10:37:34.735234    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:37:34.735264    4099 main.go:141] libmachine: Decoding PEM data...
	I0917 10:37:34.735274    4099 main.go:141] libmachine: Parsing certificate...
	I0917 10:37:34.735319    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:37:34.735345    4099 main.go:141] libmachine: Decoding PEM data...
	I0917 10:37:34.735353    4099 main.go:141] libmachine: Parsing certificate...
	I0917 10:37:34.735730    4099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:37:34.893655    4099 main.go:141] libmachine: Creating SSH key...
	I0917 10:37:34.985099    4099 main.go:141] libmachine: Creating Disk image...
	I0917 10:37:34.985104    4099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:37:34.985285    4099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:37:34.994331    4099 main.go:141] libmachine: STDOUT: 
	I0917 10:37:34.994348    4099 main.go:141] libmachine: STDERR: 
	I0917 10:37:34.994403    4099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2 +20000M
	I0917 10:37:35.002269    4099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:37:35.002284    4099 main.go:141] libmachine: STDERR: 
	I0917 10:37:35.002297    4099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:37:35.002309    4099 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:37:35.002331    4099 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:37:35.002358    4099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e5:e9:ac:67:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:37:35.003960    4099 main.go:141] libmachine: STDOUT: 
	I0917 10:37:35.003973    4099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:37:35.003992    4099 client.go:171] duration metric: took 268.82925ms to LocalClient.Create
	I0917 10:37:37.006114    4099 start.go:128] duration metric: took 2.29593775s to createHost
	I0917 10:37:37.006166    4099 start.go:83] releasing machines lock for "multinode-404000", held for 2.296040625s
	W0917 10:37:37.006205    4099 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:37:37.019609    4099 out.go:177] * Deleting "multinode-404000" in qemu2 ...
	W0917 10:37:37.053609    4099 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:37:37.053639    4099 start.go:729] Will try again in 5 seconds ...
	I0917 10:37:42.055737    4099 start.go:360] acquireMachinesLock for multinode-404000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:37:42.056270    4099 start.go:364] duration metric: took 417.459µs to acquireMachinesLock for "multinode-404000"
	I0917 10:37:42.056402    4099 start.go:93] Provisioning new machine with config: &{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:37:42.056667    4099 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:37:42.066348    4099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:37:42.117586    4099 start.go:159] libmachine.API.Create for "multinode-404000" (driver="qemu2")
	I0917 10:37:42.117646    4099 client.go:168] LocalClient.Create starting
	I0917 10:37:42.117801    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:37:42.117863    4099 main.go:141] libmachine: Decoding PEM data...
	I0917 10:37:42.117880    4099 main.go:141] libmachine: Parsing certificate...
	I0917 10:37:42.117983    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:37:42.118043    4099 main.go:141] libmachine: Decoding PEM data...
	I0917 10:37:42.118063    4099 main.go:141] libmachine: Parsing certificate...
	I0917 10:37:42.118606    4099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:37:42.288173    4099 main.go:141] libmachine: Creating SSH key...
	I0917 10:37:42.407747    4099 main.go:141] libmachine: Creating Disk image...
	I0917 10:37:42.407752    4099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:37:42.407923    4099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:37:42.417130    4099 main.go:141] libmachine: STDOUT: 
	I0917 10:37:42.417153    4099 main.go:141] libmachine: STDERR: 
	I0917 10:37:42.417211    4099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2 +20000M
	I0917 10:37:42.425023    4099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:37:42.425037    4099 main.go:141] libmachine: STDERR: 
	I0917 10:37:42.425052    4099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:37:42.425058    4099 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:37:42.425066    4099 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:37:42.425100    4099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:aa:96:09:30:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:37:42.426714    4099 main.go:141] libmachine: STDOUT: 
	I0917 10:37:42.426729    4099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:37:42.426743    4099 client.go:171] duration metric: took 309.10075ms to LocalClient.Create
	I0917 10:37:44.428853    4099 start.go:128] duration metric: took 2.372207375s to createHost
	I0917 10:37:44.428911    4099 start.go:83] releasing machines lock for "multinode-404000", held for 2.372686625s
	W0917 10:37:44.429339    4099 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:37:44.438944    4099 out.go:201] 
	W0917 10:37:44.450790    4099 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:37:44.450808    4099 out.go:270] * 
	* 
	W0917 10:37:44.453367    4099 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:37:44.463826    4099 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-404000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (67.475375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
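
The verbose trace above shows the exact launch path: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to /var/run/socket_vmnet before handing a file descriptor (fd=3 in the -netdev argument) to QEMU. The connection can be probed without minikube; this is a sketch assuming socket_vmnet_client's documented "client SOCKET COMMAND..." calling convention:

	# runs a trivial command through the client; while the daemon is down
	# this fails with the same "Connection refused" as the test output
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok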

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (108.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.936583ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-404000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- rollout status deployment/busybox: exit status 1 (59.440417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.530292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.973708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.311333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.966166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.834458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.16575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.203917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.914458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.306292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0917 10:38:42.265367    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.744333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0917 10:39:06.420184    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.70475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.251875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.207ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.229959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.356375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.81225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (108.20s)
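
All of the kubectl calls above fail quickly for the same reason: FirstStart never produced a running apiserver, so there is no server endpoint recorded for the profile and the wrapper reports no server found for cluster "multinode-404000". The 108s duration is retry overhead against that dead entry, not new breakage. One way to confirm from the host (standard kubectl; the jsonpath may print nothing if the cluster stanza was never written):

	KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig \
	  kubectl config view -o jsonpath='{.clusters[?(@.name=="multinode-404000")].cluster.server}'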

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-404000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.696209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-404000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-404000 -v 3 --alsologtostderr: exit status 83 (42.520541ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-404000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:32.863899    4193 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:32.864063    4193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:32.864067    4193 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:32.864069    4193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:32.864196    4193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:32.864432    4193 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:32.864638    4193 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:32.868921    4193 out.go:177] * The control-plane node multinode-404000 host is not running: state=Stopped
	I0917 10:39:32.871766    4193 out.go:177]   To start a cluster, run: "minikube start -p multinode-404000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-404000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.496542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
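
Exit status 83 here is minikube declining to operate on a stopped host rather than a new failure mode: per the stderr trace, "node add" loads the profile, sees the control-plane host in state Stopped, and prints the start hint instead. When scripting around this suite it may be worth gating follow-on commands on host state, sketched with the binary and profile names from this run:

	# attempt node add only if the host reports Running
	out/minikube-darwin-arm64 status -p multinode-404000 --format '{{.Host}}' | grep -q Running \
	  && out/minikube-darwin-arm64 node add -p multinode-404000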

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-404000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-404000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.740584ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-404000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-404000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-404000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
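
Unlike the minikube-wrapped calls, this test shells out to plain kubectl with --context multinode-404000; that context was never created because the start failed before kubeconfig was updated, so kubectl errors out, prints nothing to stdout, and the test's JSON decode then fails with "unexpected end of JSON input". Listing the contexts that do exist is one line (standard kubectl; the KUBECONFIG path is the one from this run's environment):

	KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig kubectl config get-contexts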

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-404000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.978333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
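
The node-count mismatch follows from the failed FirstStart: the profile's config.json was saved before VM creation (see the "Saving config to .../multinode-404000/config.json" line in that trace) and records only the single control-plane placeholder, so profile list reports 1 node where the test wants 3. If jq happens to be on the host (an assumption; the suite does not use it), the count can be read straight from the same JSON:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | select(.Name=="multinode-404000") | .Config.Nodes | length'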

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status --output json --alsologtostderr: exit status 7 (30.52675ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-404000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:33.076731    4205 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:33.076889    4205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.076892    4205 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:33.076895    4205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.077026    4205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:33.077144    4205 out.go:352] Setting JSON to true
	I0917 10:39:33.077154    4205 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:33.077208    4205 notify.go:220] Checking for updates...
	I0917 10:39:33.077362    4205 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:33.077372    4205 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:33.077606    4205 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:33.077609    4205 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:33.077611    4205 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-404000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.872166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
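The decode error above is a shape mismatch: with only the primary node present, `status --output json` printed a single JSON object (see the stdout block), while the test unmarshals into a slice ([]cmd.Status). A tolerant caller can try both shapes; this is a sketch only, with nodeStatus as a hypothetical stand-in for minikube's cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus is a hypothetical stand-in for minikube's cmd.Status.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array (multi-node) or a lone
// object (single node), the mismatch that fails above.
func decodeStatuses(raw []byte) ([]nodeStatus, error) {
	var many []nodeStatus
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one nodeStatus
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []nodeStatus{one}, nil
}

func main() {
	// The exact stdout from the failing run above.
	raw := []byte(`{"Name":"multinode-404000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	fmt.Println(sts, err)
}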

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 node stop m03: exit status 85 (48.566208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-404000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status: exit status 7 (30.447417ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr: exit status 7 (30.687583ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:33.218122    4213 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:33.218280    4213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.218283    4213 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:33.218286    4213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.218412    4213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:33.218540    4213 out.go:352] Setting JSON to false
	I0917 10:39:33.218550    4213 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:33.218628    4213 notify.go:220] Checking for updates...
	I0917 10:39:33.218787    4213 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:33.218794    4213 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:33.219042    4213 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:33.219046    4213 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:33.219048    4213 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr": multinode-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.376833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
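Exit status 85 here is the GUEST_NODE_RETRIEVE path: minikube names secondary nodes m02, m03, and so on, and since this cluster never got past its primary node, there is no m03 to stop. A sketch of that lookup against the Nodes slice visible in the profile config dump earlier; node and findNode are hypothetical names, not minikube's internals:

package main

import "fmt"

// node and findNode are hypothetical; the Name convention ("" for the
// primary node, then m02, m03, ...) matches the error above.
type node struct {
	Name         string
	ControlPlane bool
}

func findNode(nodes []node, name string) (node, error) {
	for _, n := range nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	// This cluster only ever had its primary node.
	nodes := []node{{Name: "", ControlPlane: true}}
	if _, err := findNode(nodes, "m03"); err != nil {
		fmt.Println(err) // retrieving node: Could not find node m03
	}
}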

                                                
                                    
TestMultiNode/serial/StartAfterStop (49.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.0145ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:33.279601    4217 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:33.279851    4217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.279854    4217 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:33.279857    4217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.279997    4217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:33.280241    4217 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:33.280432    4217 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:33.283887    4217 out.go:201] 
	W0917 10:39:33.286797    4217 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0917 10:39:33.286802    4217 out.go:270] * 
	* 
	W0917 10:39:33.288532    4217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:39:33.291734    4217 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0917 10:39:33.279601    4217 out.go:345] Setting OutFile to fd 1 ...
I0917 10:39:33.279851    4217 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:39:33.279854    4217 out.go:358] Setting ErrFile to fd 2...
I0917 10:39:33.279857    4217 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:39:33.279997    4217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:39:33.280241    4217 mustload.go:65] Loading cluster: multinode-404000
I0917 10:39:33.280432    4217 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:39:33.283887    4217 out.go:201] 
W0917 10:39:33.286797    4217 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0917 10:39:33.286802    4217 out.go:270] * 
* 
W0917 10:39:33.288532    4217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0917 10:39:33.291734    4217 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-404000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (30.613625ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:33.324668    4219 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:33.324827    4219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.324831    4219 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:33.324833    4219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:33.324946    4219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:33.325086    4219 out.go:352] Setting JSON to false
	I0917 10:39:33.325094    4219 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:33.325162    4219 notify.go:220] Checking for updates...
	I0917 10:39:33.325335    4219 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:33.325345    4219 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:33.325574    4219 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:33.325578    4219 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:33.325580    4219 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (74.3195ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:34.178854    4221 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:34.179069    4221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:34.179073    4221 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:34.179076    4221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:34.179255    4221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:34.179405    4221 out.go:352] Setting JSON to false
	I0917 10:39:34.179417    4221 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:34.179457    4221 notify.go:220] Checking for updates...
	I0917 10:39:34.179692    4221 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:34.179702    4221 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:34.180002    4221 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:34.180006    4221 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:34.180009    4221 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (74.665542ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:36.069413    4223 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:36.069619    4223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:36.069623    4223 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:36.069627    4223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:36.069801    4223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:36.069967    4223 out.go:352] Setting JSON to false
	I0917 10:39:36.069982    4223 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:36.070024    4223 notify.go:220] Checking for updates...
	I0917 10:39:36.070230    4223 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:36.070238    4223 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:36.070535    4223 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:36.070540    4223 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:36.070543    4223 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (73.206375ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:37.826270    4225 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:37.826491    4225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:37.826496    4225 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:37.826499    4225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:37.826666    4225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:37.826827    4225 out.go:352] Setting JSON to false
	I0917 10:39:37.826839    4225 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:37.826883    4225 notify.go:220] Checking for updates...
	I0917 10:39:37.827104    4225 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:37.827114    4225 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:37.827442    4225 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:37.827447    4225 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:37.827450    4225 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (69.170583ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:40.786996    4227 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:40.787235    4227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:40.787240    4227 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:40.787243    4227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:40.787439    4227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:40.787632    4227 out.go:352] Setting JSON to false
	I0917 10:39:40.787645    4227 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:40.787694    4227 notify.go:220] Checking for updates...
	I0917 10:39:40.787949    4227 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:40.787959    4227 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:40.788300    4227 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:40.788306    4227 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:40.788309    4227 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (76.73375ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:46.911629    4232 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:46.911829    4232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:46.911833    4232 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:46.911836    4232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:46.912013    4232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:46.912187    4232 out.go:352] Setting JSON to false
	I0917 10:39:46.912202    4232 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:46.912233    4232 notify.go:220] Checking for updates...
	I0917 10:39:46.912482    4232 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:46.912492    4232 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:46.912821    4232 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:46.912826    4232 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:46.912829    4232 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (73.871ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:39:55.698516    4234 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:39:55.698723    4234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:55.698728    4234 out.go:358] Setting ErrFile to fd 2...
	I0917 10:39:55.698732    4234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:39:55.698898    4234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:39:55.699070    4234 out.go:352] Setting JSON to false
	I0917 10:39:55.699084    4234 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:39:55.699118    4234 notify.go:220] Checking for updates...
	I0917 10:39:55.699396    4234 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:39:55.699410    4234 status.go:255] checking status of multinode-404000 ...
	I0917 10:39:55.699726    4234 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:39:55.699732    4234 status.go:343] host is not running, skipping remaining checks
	I0917 10:39:55.699735    4234 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (75.303958ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:40:06.726898    4239 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:40:06.727096    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:06.727101    4239 out.go:358] Setting ErrFile to fd 2...
	I0917 10:40:06.727104    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:06.727277    4239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:40:06.727444    4239 out.go:352] Setting JSON to false
	I0917 10:40:06.727459    4239 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:40:06.727497    4239 notify.go:220] Checking for updates...
	I0917 10:40:06.727754    4239 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:40:06.727765    4239 status.go:255] checking status of multinode-404000 ...
	I0917 10:40:06.728077    4239 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:40:06.728082    4239 status.go:343] host is not running, skipping remaining checks
	I0917 10:40:06.728085    4239 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr: exit status 7 (75.136208ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:40:22.585530    4242 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:40:22.585735    4242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:22.585740    4242 out.go:358] Setting ErrFile to fd 2...
	I0917 10:40:22.585743    4242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:22.585926    4242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:40:22.586102    4242 out.go:352] Setting JSON to false
	I0917 10:40:22.586114    4242 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:40:22.586162    4242 notify.go:220] Checking for updates...
	I0917 10:40:22.586405    4242 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:40:22.586413    4242 status.go:255] checking status of multinode-404000 ...
	I0917 10:40:22.586741    4242 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:40:22.586746    4242 status.go:343] host is not running, skipping remaining checks
	I0917 10:40:22.586749    4242 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-404000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (33.89875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.37s)
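The stderr timestamps (10:39:33, :34, :36, :37, :40, :46, :55, 10:40:06, 10:40:22) show the test re-polling `status` with growing gaps until it gives up. A sketch of such a backoff poll, assuming a minikube binary on PATH; isRunning, the 45s budget, and the growth factor are illustrative choices, not the test's exact schedule:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// isRunning shells out to minikube and checks the reported host state.
func isRunning(profile string) bool {
	out, _ := exec.Command("minikube", "-p", profile, "status",
		"--format", "{{.Host}}").Output()
	return strings.TrimSpace(string(out)) == "Running"
}

func main() {
	delay := time.Second
	deadline := time.Now().Add(45 * time.Second)
	for time.Now().Before(deadline) {
		if isRunning("multinode-404000") {
			fmt.Println("host is running")
			return
		}
		time.Sleep(delay)
		delay += delay / 2 // widen the gap between polls, as the log suggests
	}
	fmt.Println("timed out waiting for the host to run")
}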

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-404000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-404000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-404000: (3.872356958s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-404000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-404000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224646791s)

                                                
                                                
-- stdout --
	* [multinode-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-404000" primary control-plane node in "multinode-404000" cluster
	* Restarting existing qemu2 VM for "multinode-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:40:26.589485    4268 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:40:26.589644    4268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:26.589648    4268 out.go:358] Setting ErrFile to fd 2...
	I0917 10:40:26.589651    4268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:26.589822    4268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:40:26.591075    4268 out.go:352] Setting JSON to false
	I0917 10:40:26.610307    4268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4189,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:40:26.610398    4268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:40:26.615070    4268 out.go:177] * [multinode-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:40:26.622104    4268 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:40:26.622189    4268 notify.go:220] Checking for updates...
	I0917 10:40:26.629010    4268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:40:26.632022    4268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:40:26.634972    4268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:40:26.637981    4268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:40:26.641046    4268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:40:26.644260    4268 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:40:26.644317    4268 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:40:26.648958    4268 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:40:26.655948    4268 start.go:297] selected driver: qemu2
	I0917 10:40:26.655957    4268 start.go:901] validating driver "qemu2" against &{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:40:26.656021    4268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:40:26.658514    4268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:40:26.658536    4268 cni.go:84] Creating CNI manager for ""
	I0917 10:40:26.658558    4268 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 10:40:26.658600    4268 start.go:340] cluster config:
	{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:40:26.662367    4268 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:40:26.669871    4268 out.go:177] * Starting "multinode-404000" primary control-plane node in "multinode-404000" cluster
	I0917 10:40:26.673961    4268 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:40:26.673975    4268 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:40:26.673989    4268 cache.go:56] Caching tarball of preloaded images
	I0917 10:40:26.674042    4268 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:40:26.674048    4268 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:40:26.674100    4268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/multinode-404000/config.json ...
	I0917 10:40:26.674541    4268 start.go:360] acquireMachinesLock for multinode-404000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:40:26.674578    4268 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "multinode-404000"
	I0917 10:40:26.674587    4268 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:40:26.674592    4268 fix.go:54] fixHost starting: 
	I0917 10:40:26.674713    4268 fix.go:112] recreateIfNeeded on multinode-404000: state=Stopped err=<nil>
	W0917 10:40:26.674721    4268 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:40:26.678864    4268 out.go:177] * Restarting existing qemu2 VM for "multinode-404000" ...
	I0917 10:40:26.686964    4268 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:40:26.687006    4268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:aa:96:09:30:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:40:26.689073    4268 main.go:141] libmachine: STDOUT: 
	I0917 10:40:26.689095    4268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:40:26.689127    4268 fix.go:56] duration metric: took 14.535083ms for fixHost
	I0917 10:40:26.689132    4268 start.go:83] releasing machines lock for "multinode-404000", held for 14.550041ms
	W0917 10:40:26.689138    4268 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:40:26.689181    4268 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:40:26.689186    4268 start.go:729] Will try again in 5 seconds ...
	I0917 10:40:31.691214    4268 start.go:360] acquireMachinesLock for multinode-404000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:40:31.691561    4268 start.go:364] duration metric: took 265.291µs to acquireMachinesLock for "multinode-404000"
	I0917 10:40:31.691684    4268 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:40:31.691702    4268 fix.go:54] fixHost starting: 
	I0917 10:40:31.692357    4268 fix.go:112] recreateIfNeeded on multinode-404000: state=Stopped err=<nil>
	W0917 10:40:31.692388    4268 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:40:31.696891    4268 out.go:177] * Restarting existing qemu2 VM for "multinode-404000" ...
	I0917 10:40:31.704771    4268 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:40:31.704990    4268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:aa:96:09:30:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:40:31.713855    4268 main.go:141] libmachine: STDOUT: 
	I0917 10:40:31.713909    4268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:40:31.713996    4268 fix.go:56] duration metric: took 22.27625ms for fixHost
	I0917 10:40:31.714015    4268 start.go:83] releasing machines lock for "multinode-404000", held for 22.435375ms
	W0917 10:40:31.714215    4268 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:40:31.721750    4268 out.go:201] 
	W0917 10:40:31.725884    4268 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:40:31.725941    4268 out.go:270] * 
	* 
	W0917 10:40:31.728362    4268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:40:31.735758    4268 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-404000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-404000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (32.75925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.23s)
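Both restart attempts above die at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning the socket_vmnet daemon is not listening. A pre-flight probe of that unix socket, offered only as a sketch (the socket path is taken from the config dump above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the unix socket the qemu2 driver needs for VM networking.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		fmt.Println("start the socket_vmnet daemon before `minikube start`")
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}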

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 node delete m03: exit status 83 (40.674625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-404000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-404000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr: exit status 7 (29.74625ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:40:31.921250    4282 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:40:31.921422    4282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:31.921425    4282 out.go:358] Setting ErrFile to fd 2...
	I0917 10:40:31.921427    4282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:31.921553    4282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:40:31.921679    4282 out.go:352] Setting JSON to false
	I0917 10:40:31.921689    4282 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:40:31.921753    4282 notify.go:220] Checking for updates...
	I0917 10:40:31.921910    4282 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:40:31.921918    4282 status.go:255] checking status of multinode-404000 ...
	I0917 10:40:31.922145    4282 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:40:31.922149    4282 status.go:343] host is not running, skipping remaining checks
	I0917 10:40:31.922151    4282 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (29.827291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
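
Note: exit status 83 above is minikube declining to operate on a stopped cluster rather than a new failure; the remediation it prints is the start command itself. On a healthy agent the sequence would be the following (the start command is quoted verbatim from the stdout hint above; success still depends on socket_vmnet being reachable):

	out/minikube-darwin-arm64 start -p multinode-404000
	out/minikube-darwin-arm64 -p multinode-404000 node delete m03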

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-404000 stop: (2.955629583s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status: exit status 7 (67.643667ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr: exit status 7 (33.527042ms)

                                                
                                                
-- stdout --
	multinode-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:40:35.008467    4306 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:40:35.008638    4306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:35.008642    4306 out.go:358] Setting ErrFile to fd 2...
	I0917 10:40:35.008644    4306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:35.008767    4306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:40:35.008886    4306 out.go:352] Setting JSON to false
	I0917 10:40:35.008896    4306 mustload.go:65] Loading cluster: multinode-404000
	I0917 10:40:35.008944    4306 notify.go:220] Checking for updates...
	I0917 10:40:35.009099    4306 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:40:35.009105    4306 status.go:255] checking status of multinode-404000 ...
	I0917 10:40:35.009356    4306 status.go:330] multinode-404000 host status = "Stopped" (err=<nil>)
	I0917 10:40:35.009360    4306 status.go:343] host is not running, skipping remaining checks
	I0917 10:40:35.009362    4306 status.go:257] multinode-404000 status: &{Name:multinode-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr": multinode-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr": multinode-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (30.637708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.09s)
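
Note: the stop itself succeeded (2.96s, exit 0); the assertions at multinode_test.go:364 and :368 fail because status reports only a single node. The worker was never created in the earlier FreshStart2Nodes failure, so the expected count of stopped hosts and kubelets cannot be met. A rough manual equivalent of the check, assuming the test counts status stanzas (the grep is illustrative, not the test's actual code):

	# Expect one "host: Stopped" line per node; here only the control plane exists.
	out/minikube-darwin-arm64 -p multinode-404000 status --alsologtostderr | grep -c "host: Stopped"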

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-404000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-404000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18794825s)

                                                
                                                
-- stdout --
	* [multinode-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-404000" primary control-plane node in "multinode-404000" cluster
	* Restarting existing qemu2 VM for "multinode-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:40:35.069172    4310 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:40:35.069296    4310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:35.069299    4310 out.go:358] Setting ErrFile to fd 2...
	I0917 10:40:35.069302    4310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:40:35.069434    4310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:40:35.070458    4310 out.go:352] Setting JSON to false
	I0917 10:40:35.086667    4310 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4198,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:40:35.086759    4310 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:40:35.091474    4310 out.go:177] * [multinode-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:40:35.099460    4310 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:40:35.099525    4310 notify.go:220] Checking for updates...
	I0917 10:40:35.106391    4310 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:40:35.109406    4310 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:40:35.112339    4310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:40:35.115399    4310 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:40:35.118404    4310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:40:35.121624    4310 config.go:182] Loaded profile config "multinode-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:40:35.121881    4310 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:40:35.125377    4310 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:40:35.132361    4310 start.go:297] selected driver: qemu2
	I0917 10:40:35.132367    4310 start.go:901] validating driver "qemu2" against &{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:40:35.132421    4310 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:40:35.134811    4310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:40:35.134835    4310 cni.go:84] Creating CNI manager for ""
	I0917 10:40:35.134866    4310 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 10:40:35.134918    4310 start.go:340] cluster config:
	{Name:multinode-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:40:35.138722    4310 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:40:35.146401    4310 out.go:177] * Starting "multinode-404000" primary control-plane node in "multinode-404000" cluster
	I0917 10:40:35.150368    4310 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:40:35.150385    4310 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:40:35.150394    4310 cache.go:56] Caching tarball of preloaded images
	I0917 10:40:35.150449    4310 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:40:35.150463    4310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:40:35.150522    4310 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/multinode-404000/config.json ...
	I0917 10:40:35.150977    4310 start.go:360] acquireMachinesLock for multinode-404000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:40:35.151006    4310 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "multinode-404000"
	I0917 10:40:35.151015    4310 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:40:35.151021    4310 fix.go:54] fixHost starting: 
	I0917 10:40:35.151143    4310 fix.go:112] recreateIfNeeded on multinode-404000: state=Stopped err=<nil>
	W0917 10:40:35.151154    4310 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:40:35.159376    4310 out.go:177] * Restarting existing qemu2 VM for "multinode-404000" ...
	I0917 10:40:35.163373    4310 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:40:35.163413    4310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:aa:96:09:30:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:40:35.165420    4310 main.go:141] libmachine: STDOUT: 
	I0917 10:40:35.165441    4310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:40:35.165471    4310 fix.go:56] duration metric: took 14.4515ms for fixHost
	I0917 10:40:35.165477    4310 start.go:83] releasing machines lock for "multinode-404000", held for 14.467666ms
	W0917 10:40:35.165483    4310 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:40:35.165513    4310 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:40:35.165518    4310 start.go:729] Will try again in 5 seconds ...
	I0917 10:40:40.166912    4310 start.go:360] acquireMachinesLock for multinode-404000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:40:40.167462    4310 start.go:364] duration metric: took 444.083µs to acquireMachinesLock for "multinode-404000"
	I0917 10:40:40.167619    4310 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:40:40.167639    4310 fix.go:54] fixHost starting: 
	I0917 10:40:40.168359    4310 fix.go:112] recreateIfNeeded on multinode-404000: state=Stopped err=<nil>
	W0917 10:40:40.168387    4310 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:40:40.172888    4310 out.go:177] * Restarting existing qemu2 VM for "multinode-404000" ...
	I0917 10:40:40.180865    4310 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:40:40.181095    4310 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:aa:96:09:30:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/multinode-404000/disk.qcow2
	I0917 10:40:40.190876    4310 main.go:141] libmachine: STDOUT: 
	I0917 10:40:40.190947    4310 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:40:40.191054    4310 fix.go:56] duration metric: took 23.414041ms for fixHost
	I0917 10:40:40.191079    4310 start.go:83] releasing machines lock for "multinode-404000", held for 23.593208ms
	W0917 10:40:40.191252    4310 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:40:40.201916    4310 out.go:201] 
	W0917 10:40:40.206001    4310 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:40:40.206073    4310 out.go:270] * 
	* 
	W0917 10:40:40.208865    4310 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:40:40.216861    4310 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-404000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (72.692333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
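
Note: the trace shows minikube's built-in retry: StartHost fails, start.go:729 waits five seconds, the second attempt hits the same Connection refused, and the run exits with GUEST_PROVISION. The cleanup path the log itself suggests is below, though deleting the profile cannot help while the socket_vmnet daemon is down (commands taken from the log's own advice and the test invocation above):

	out/minikube-darwin-arm64 delete -p multinode-404000
	out/minikube-darwin-arm64 start -p multinode-404000 --wait=true --driver=qemu2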

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-404000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-404000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-404000-m01 --driver=qemu2 : exit status 80 (10.189935291s)

                                                
                                                
-- stdout --
	* [multinode-404000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-404000-m01" primary control-plane node in "multinode-404000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-404000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-404000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-404000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-404000-m02 --driver=qemu2 : exit status 80 (9.989002791s)

                                                
                                                
-- stdout --
	* [multinode-404000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-404000-m02" primary control-plane node in "multinode-404000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-404000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-404000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-404000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-404000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-404000: exit status 83 (80.554875ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-404000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-404000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-404000 -n multinode-404000: exit status 7 (31.050666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.41s)
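
Note: this test exercises profile-name collisions: it starts throwaway profiles multinode-404000-m01 and multinode-404000-m02, whose names match the node-naming scheme of the existing cluster, and then checks that node add still targets the right profile. Both throwaway starts die on the same socket_vmnet error before any conflict handling is reached, so the failure is environmental. The scenario it is meant to probe, in isolation (a hedged reconstruction from the commands above, not the test source):

	out/minikube-darwin-arm64 node list -p multinode-404000
	out/minikube-darwin-arm64 start -p multinode-404000-m01 --driver=qemu2
	out/minikube-darwin-arm64 node add -p multinode-404000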

                                                
                                    
TestPreload (10.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.872158375s)

                                                
                                                
-- stdout --
	* [test-preload-431000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-431000" primary control-plane node in "test-preload-431000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:41:00.856800    4367 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:41:00.857170    4367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:00.857176    4367 out.go:358] Setting ErrFile to fd 2...
	I0917 10:41:00.857178    4367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:00.857365    4367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:41:00.858793    4367 out.go:352] Setting JSON to false
	I0917 10:41:00.875292    4367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4223,"bootTime":1726590637,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:41:00.875387    4367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:41:00.881576    4367 out.go:177] * [test-preload-431000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:41:00.890354    4367 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:41:00.890454    4367 notify.go:220] Checking for updates...
	I0917 10:41:00.898417    4367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:41:00.899975    4367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:41:00.903460    4367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:41:00.906481    4367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:41:00.909482    4367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:41:00.912776    4367 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:00.912857    4367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:41:00.916422    4367 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:41:00.923449    4367 start.go:297] selected driver: qemu2
	I0917 10:41:00.923459    4367 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:41:00.923466    4367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:41:00.925972    4367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:41:00.929465    4367 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:41:00.932499    4367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:41:00.932516    4367 cni.go:84] Creating CNI manager for ""
	I0917 10:41:00.932536    4367 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:41:00.932544    4367 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:41:00.932565    4367 start.go:340] cluster config:
	{Name:test-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:41:00.936417    4367 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.943441    4367 out.go:177] * Starting "test-preload-431000" primary control-plane node in "test-preload-431000" cluster
	I0917 10:41:00.946415    4367 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0917 10:41:00.946522    4367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/test-preload-431000/config.json ...
	I0917 10:41:00.946520    4367 cache.go:107] acquiring lock: {Name:mkdc12a93d9deba88b8d1060e8a60dfdaeded8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946543    4367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/test-preload-431000/config.json: {Name:mk4351fe609d728587b1096bec13e349cdb3c2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:41:00.946538    4367 cache.go:107] acquiring lock: {Name:mkcc7087351f593570c7b350129484e6a90ada61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946534    4367 cache.go:107] acquiring lock: {Name:mka48afe9174ecb2bbb0323dac61373a472b5454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946570    4367 cache.go:107] acquiring lock: {Name:mk0021faa11ee129c7230f3ad9dc9aebccdd1ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946725    4367 cache.go:107] acquiring lock: {Name:mk6009b651cd215e9b9d656c53e3a3a25136d3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946748    4367 cache.go:107] acquiring lock: {Name:mk4fa037646cf7fd77c5133cd888b8465da6b44c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946798    4367 cache.go:107] acquiring lock: {Name:mkf2bfa6668e1cb5689b51e348627f18a38a06d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946846    4367 start.go:360] acquireMachinesLock for test-preload-431000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:00.946862    4367 cache.go:107] acquiring lock: {Name:mka526cc1ac7dd2922ba67b931a8b5155c0b72c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:41:00.946899    4367 start.go:364] duration metric: took 41.75µs to acquireMachinesLock for "test-preload-431000"
	I0917 10:41:00.946956    4367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:41:00.946980    4367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0917 10:41:00.947009    4367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0917 10:41:00.947025    4367 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 10:41:00.947040    4367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0917 10:41:00.947043    4367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0917 10:41:00.946911    4367 start.go:93] Provisioning new machine with config: &{Name:test-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:00.947106    4367 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:00.947009    4367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:41:00.947200    4367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:41:00.950499    4367 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:41:00.958216    4367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0917 10:41:00.958921    4367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:41:00.961151    4367 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 10:41:00.961426    4367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:41:00.961560    4367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:41:00.961516    4367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0917 10:41:00.961799    4367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0917 10:41:00.961903    4367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0917 10:41:00.967247    4367 start.go:159] libmachine.API.Create for "test-preload-431000" (driver="qemu2")
	I0917 10:41:00.967272    4367 client.go:168] LocalClient.Create starting
	I0917 10:41:00.967361    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:00.967392    4367 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:00.967401    4367 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:00.967446    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:00.967473    4367 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:00.967480    4367 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:00.967831    4367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:01.125451    4367 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:01.282957    4367 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:01.282991    4367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:01.283216    4367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2
	I0917 10:41:01.292856    4367 main.go:141] libmachine: STDOUT: 
	I0917 10:41:01.292883    4367 main.go:141] libmachine: STDERR: 
	I0917 10:41:01.292947    4367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2 +20000M
	I0917 10:41:01.302189    4367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:01.302206    4367 main.go:141] libmachine: STDERR: 
	I0917 10:41:01.302224    4367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2
	I0917 10:41:01.302230    4367 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:01.302247    4367 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:01.302275    4367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:ad:fb:ed:20:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2
	I0917 10:41:01.304244    4367 main.go:141] libmachine: STDOUT: 
	I0917 10:41:01.304259    4367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:01.304277    4367 client.go:171] duration metric: took 337.010625ms to LocalClient.Create
	I0917 10:41:01.385556    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0917 10:41:01.422651    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 10:41:01.440483    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 10:41:01.477251    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0917 10:41:01.496295    4367 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 10:41:01.496333    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 10:41:01.530140    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0917 10:41:01.554884    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0917 10:41:01.639018    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0917 10:41:01.639090    4367 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 692.375584ms
	I0917 10:41:01.639134    4367 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0917 10:41:02.046081    4367 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 10:41:02.046173    4367 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 10:41:02.458473    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0917 10:41:02.458520    4367 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.511824625s
	I0917 10:41:02.458544    4367 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0917 10:41:03.006954    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 10:41:03.006987    4367 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.060535625s
	I0917 10:41:03.007003    4367 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 10:41:03.304542    4367 start.go:128] duration metric: took 2.357474375s to createHost
	I0917 10:41:03.304606    4367 start.go:83] releasing machines lock for "test-preload-431000", held for 2.35777025s
	W0917 10:41:03.304649    4367 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:03.313823    4367 out.go:177] * Deleting "test-preload-431000" in qemu2 ...
	W0917 10:41:03.346664    4367 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:03.346684    4367 start.go:729] Will try again in 5 seconds ...
	I0917 10:41:03.572252    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0917 10:41:03.572325    4367 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.625703584s
	I0917 10:41:03.572354    4367 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0917 10:41:05.519233    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0917 10:41:05.519286    4367 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.572578125s
	I0917 10:41:05.519314    4367 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0917 10:41:06.476129    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0917 10:41:06.476181    4367 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.52983s
	I0917 10:41:06.476205    4367 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0917 10:41:07.729370    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0917 10:41:07.729417    4367 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.783101167s
	I0917 10:41:07.729444    4367 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0917 10:41:08.346822    4367 start.go:360] acquireMachinesLock for test-preload-431000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:41:08.347266    4367 start.go:364] duration metric: took 369.959µs to acquireMachinesLock for "test-preload-431000"
	I0917 10:41:08.347391    4367 start.go:93] Provisioning new machine with config: &{Name:test-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:41:08.347640    4367 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:41:08.354250    4367 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:41:08.407116    4367 start.go:159] libmachine.API.Create for "test-preload-431000" (driver="qemu2")
	I0917 10:41:08.407164    4367 client.go:168] LocalClient.Create starting
	I0917 10:41:08.407261    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:41:08.407343    4367 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:08.407359    4367 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:08.407415    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:41:08.407458    4367 main.go:141] libmachine: Decoding PEM data...
	I0917 10:41:08.407470    4367 main.go:141] libmachine: Parsing certificate...
	I0917 10:41:08.408009    4367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:41:08.582167    4367 main.go:141] libmachine: Creating SSH key...
	I0917 10:41:08.632110    4367 main.go:141] libmachine: Creating Disk image...
	I0917 10:41:08.632121    4367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:41:08.632311    4367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2
	I0917 10:41:08.641626    4367 main.go:141] libmachine: STDOUT: 
	I0917 10:41:08.641648    4367 main.go:141] libmachine: STDERR: 
	I0917 10:41:08.641721    4367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2 +20000M
	I0917 10:41:08.649867    4367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:41:08.649880    4367 main.go:141] libmachine: STDERR: 
	I0917 10:41:08.649891    4367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2
	I0917 10:41:08.649897    4367 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:41:08.649910    4367 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:41:08.649947    4367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:6b:da:fe:b4:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/test-preload-431000/disk.qcow2
	I0917 10:41:08.651745    4367 main.go:141] libmachine: STDOUT: 
	I0917 10:41:08.651757    4367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:41:08.651773    4367 client.go:171] duration metric: took 244.611083ms to LocalClient.Create
	I0917 10:41:10.083695    4367 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0917 10:41:10.083796    4367 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.137476s
	I0917 10:41:10.083832    4367 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0917 10:41:10.083870    4367 cache.go:87] Successfully saved all images to host disk.
	I0917 10:41:10.652584    4367 start.go:128] duration metric: took 2.304974625s to createHost
	I0917 10:41:10.652977    4367 start.go:83] releasing machines lock for "test-preload-431000", held for 2.305755167s
	W0917 10:41:10.653285    4367 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:41:10.663031    4367 out.go:201] 
	W0917 10:41:10.675201    4367 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:41:10.675266    4367 out.go:270] * 
	* 
	W0917 10:41:10.677992    4367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:41:10.686994    4367 out.go:201] 

** /stderr **
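
Note: the disk-image preparation in the log above had already succeeded before the network failure; only the VM launch fails. Condensed from the log, the driver's two qemu-img steps are (paths shortened here for readability; the real commands use the full profile paths shown above):

	# Convert the raw boot disk to qcow2, then grow it to the requested 20000 MB.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M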
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-17 10:41:10.703165 -0700 PDT m=+2749.725697543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-431000 -n test-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-431000 -n test-preload-431000: exit status 7 (66.225917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-431000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-431000
--- FAIL: TestPreload (10.03s)
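
Note: this failure, and every "Connection refused" failure below, points at the same root cause: nothing was accepting connections on /var/run/socket_vmnet, so socket_vmnet_client could not hand qemu a vmnet file descriptor. A minimal first check on the CI host might be (the brew service line is only a guess at how the daemon is managed on this agent; socket_vmnet can also run as a plain launchd job):

	ls -l /var/run/socket_vmnet                # does the daemon's listening socket exist?
	pgrep -fl socket_vmnet                     # is the socket_vmnet daemon running at all?
	sudo brew services restart socket_vmnet    # hypothetical: Homebrew-managed daemon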

TestScheduledStopUnix (10.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-704000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-704000 --memory=2048 --driver=qemu2 : exit status 80 (10.00823925s)

-- stdout --
	* [scheduled-stop-704000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-704000" primary control-plane node in "scheduled-stop-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-704000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-704000" primary control-plane node in "scheduled-stop-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-17 10:41:20.864314 -0700 PDT m=+2759.887161584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-704000 -n scheduled-stop-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-704000 -n scheduled-stop-704000: exit status 7 (68.635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-704000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-704000
--- FAIL: TestScheduledStopUnix (10.16s)

TestSkaffold (12.4s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3682015908 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3682015908 version: (1.069645875s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-890000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-890000 --memory=2600 --driver=qemu2 : exit status 80 (9.986259125s)

-- stdout --
	* [skaffold-890000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-890000" primary control-plane node in "skaffold-890000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-890000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-890000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-890000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-890000" primary control-plane node in "skaffold-890000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-890000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-890000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-17 10:41:33.272977 -0700 PDT m=+2772.296207459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-890000 -n skaffold-890000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-890000 -n skaffold-890000: exit status 7 (64.172459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-890000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-890000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-890000
--- FAIL: TestSkaffold (12.40s)
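
Note: for reference, the step that fails in all of these tests is the launch wrapper visible in the TestPreload log: socket_vmnet_client connects to /var/run/socket_vmnet and then execs qemu with that connection passed as file descriptor 3, which "-netdev socket,id=net0,fd=3" consumes. Reduced to its network-relevant arguments (a sketch, not the full command line from the log):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2600 -smp 2 \
	  -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 \
	  disk.qcow2

Because the initial connect() is refused, qemu never starts, which is why each of these tests dies within roughly 10 seconds during host creation.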

TestRunningBinaryUpgrade (597.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2950948164 start -p running-upgrade-161000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2950948164 start -p running-upgrade-161000 --memory=2200 --vm-driver=qemu2 : (57.825276375s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-161000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0917 10:43:42.256152    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:44:06.411001    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-161000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.916961833s)

-- stdout --
	* [running-upgrade-161000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-161000" primary control-plane node in "running-upgrade-161000" cluster
	* Updating the running qemu2 "running-upgrade-161000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0917 10:43:15.101633    4746 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:43:15.101777    4746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:43:15.101781    4746 out.go:358] Setting ErrFile to fd 2...
	I0917 10:43:15.101783    4746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:43:15.101906    4746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:43:15.103006    4746 out.go:352] Setting JSON to false
	I0917 10:43:15.119404    4746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4358,"bootTime":1726590637,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:43:15.119471    4746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:43:15.123389    4746 out.go:177] * [running-upgrade-161000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:43:15.129431    4746 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:43:15.129522    4746 notify.go:220] Checking for updates...
	I0917 10:43:15.137372    4746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:43:15.141428    4746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:43:15.144368    4746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:43:15.147406    4746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:43:15.150425    4746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:43:15.153670    4746 config.go:182] Loaded profile config "running-upgrade-161000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:43:15.157332    4746 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 10:43:15.160394    4746 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:43:15.164346    4746 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:43:15.171430    4746 start.go:297] selected driver: qemu2
	I0917 10:43:15.171438    4746 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:43:15.171499    4746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:43:15.173868    4746 cni.go:84] Creating CNI manager for ""
	I0917 10:43:15.173905    4746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:43:15.173932    4746 start.go:340] cluster config:
	{Name:running-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:43:15.173982    4746 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:43:15.181395    4746 out.go:177] * Starting "running-upgrade-161000" primary control-plane node in "running-upgrade-161000" cluster
	I0917 10:43:15.185278    4746 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 10:43:15.185294    4746 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0917 10:43:15.185300    4746 cache.go:56] Caching tarball of preloaded images
	I0917 10:43:15.185355    4746 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:43:15.185360    4746 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0917 10:43:15.185424    4746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/config.json ...
	I0917 10:43:15.185881    4746 start.go:360] acquireMachinesLock for running-upgrade-161000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:43:15.185910    4746 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "running-upgrade-161000"
	I0917 10:43:15.185917    4746 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:43:15.185924    4746 fix.go:54] fixHost starting: 
	I0917 10:43:15.186575    4746 fix.go:112] recreateIfNeeded on running-upgrade-161000: state=Running err=<nil>
	W0917 10:43:15.186584    4746 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:43:15.195240    4746 out.go:177] * Updating the running qemu2 "running-upgrade-161000" VM ...
	I0917 10:43:15.199355    4746 machine.go:93] provisionDockerMachine start ...
	I0917 10:43:15.199393    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.199503    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.199508    4746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:43:15.251151    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-161000
	
	I0917 10:43:15.251164    4746 buildroot.go:166] provisioning hostname "running-upgrade-161000"
	I0917 10:43:15.251205    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.251299    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.251305    4746 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-161000 && echo "running-upgrade-161000" | sudo tee /etc/hostname
	I0917 10:43:15.304601    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-161000
	
	I0917 10:43:15.304658    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.304772    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.304783    4746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-161000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-161000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-161000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:43:15.356122    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:43:15.356132    4746 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1312/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1312/.minikube}
	I0917 10:43:15.356147    4746 buildroot.go:174] setting up certificates
	I0917 10:43:15.356154    4746 provision.go:84] configureAuth start
	I0917 10:43:15.356158    4746 provision.go:143] copyHostCerts
	I0917 10:43:15.356213    4746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem, removing ...
	I0917 10:43:15.356219    4746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem
	I0917 10:43:15.356347    4746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem (1078 bytes)
	I0917 10:43:15.356520    4746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem, removing ...
	I0917 10:43:15.356524    4746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem
	I0917 10:43:15.356573    4746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem (1123 bytes)
	I0917 10:43:15.356673    4746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem, removing ...
	I0917 10:43:15.356676    4746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem
	I0917 10:43:15.356713    4746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem (1679 bytes)
	I0917 10:43:15.356803    4746 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-161000 san=[127.0.0.1 localhost minikube running-upgrade-161000]
	I0917 10:43:15.403459    4746 provision.go:177] copyRemoteCerts
	I0917 10:43:15.403494    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:43:15.403501    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:43:15.430430    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:43:15.437641    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 10:43:15.444467    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:43:15.451057    4746 provision.go:87] duration metric: took 94.897833ms to configureAuth
	I0917 10:43:15.451065    4746 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:43:15.451170    4746 config.go:182] Loaded profile config "running-upgrade-161000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:43:15.451205    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.451300    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.451304    4746 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:43:15.500400    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:43:15.500411    4746 buildroot.go:70] root file system type: tmpfs
	I0917 10:43:15.500471    4746 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:43:15.500532    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.500676    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.500711    4746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:43:15.555987    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:43:15.556041    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.556139    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.556148    4746 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:43:15.607917    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:43:15.607933    4746 machine.go:96] duration metric: took 408.578666ms to provisionDockerMachine
	I0917 10:43:15.607939    4746 start.go:293] postStartSetup for "running-upgrade-161000" (driver="qemu2")
	I0917 10:43:15.607948    4746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:43:15.608002    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:43:15.608010    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:43:15.634422    4746 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:43:15.635813    4746 info.go:137] Remote host: Buildroot 2021.02.12
	I0917 10:43:15.635821    4746 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/addons for local assets ...
	I0917 10:43:15.635885    4746 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/files for local assets ...
	I0917 10:43:15.635973    4746 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem -> 18402.pem in /etc/ssl/certs
	I0917 10:43:15.636068    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:43:15.638953    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem --> /etc/ssl/certs/18402.pem (1708 bytes)
	I0917 10:43:15.645522    4746 start.go:296] duration metric: took 37.576083ms for postStartSetup
	I0917 10:43:15.645535    4746 fix.go:56] duration metric: took 459.62875ms for fixHost
	I0917 10:43:15.645571    4746 main.go:141] libmachine: Using SSH client type: native
	I0917 10:43:15.645669    4746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e19190] 0x102e1b9d0 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0917 10:43:15.645674    4746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:43:15.693196    4746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594995.899078179
	
	I0917 10:43:15.693205    4746 fix.go:216] guest clock: 1726594995.899078179
	I0917 10:43:15.693210    4746 fix.go:229] Guest: 2024-09-17 10:43:15.899078179 -0700 PDT Remote: 2024-09-17 10:43:15.645537 -0700 PDT m=+0.564817126 (delta=253.541179ms)
	I0917 10:43:15.693221    4746 fix.go:200] guest clock delta is within tolerance: 253.541179ms
	I0917 10:43:15.693225    4746 start.go:83] releasing machines lock for "running-upgrade-161000", held for 507.3265ms
	I0917 10:43:15.693297    4746 ssh_runner.go:195] Run: cat /version.json
	I0917 10:43:15.693307    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:43:15.693297    4746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:43:15.693345    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	W0917 10:43:15.693870    4746 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50267: connect: connection refused
	I0917 10:43:15.693889    4746 retry.go:31] will retry after 175.80441ms: dial tcp [::1]:50267: connect: connection refused
	W0917 10:43:15.897926    4746 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0917 10:43:15.897982    4746 ssh_runner.go:195] Run: systemctl --version
	I0917 10:43:15.899862    4746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:43:15.901714    4746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:43:15.901739    4746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0917 10:43:15.904558    4746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0917 10:43:15.908818    4746 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:43:15.908825    4746 start.go:495] detecting cgroup driver to use...
	I0917 10:43:15.908891    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:43:15.914226    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 10:43:15.917715    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:43:15.920854    4746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:43:15.920884    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:43:15.923800    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:43:15.927009    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:43:15.930022    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:43:15.933264    4746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:43:15.936083    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:43:15.939036    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:43:15.942383    4746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
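
	Taken together, the sed edits above rewrite /etc/containerd/config.toml for a cgroupfs setup: pause 3.7 as the sandbox image, SystemdCgroup = false, the runc v2 shim, CNI configs under /etc/cni/net.d, and unprivileged ports enabled. A hedged spot-check of the result (key names taken from the sed patterns; surrounding defaults assumed untouched):

	  sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  #   sandbox_image = "registry.k8s.io/pause:3.7"
	  #   SystemdCgroup = false
	  #   conf_dir = "/etc/cni/net.d"
	  #   enable_unprivileged_ports = true
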
	I0917 10:43:15.946082    4746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:43:15.948910    4746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:43:15.951558    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:43:16.048848    4746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:43:16.059525    4746 start.go:495] detecting cgroup driver to use...
	I0917 10:43:16.059595    4746 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:43:16.068009    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:43:16.072753    4746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:43:16.084093    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:43:16.088942    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:43:16.093499    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:43:16.098923    4746 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:43:16.100268    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:43:16.102908    4746 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 10:43:16.108097    4746 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:43:16.201820    4746 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:43:16.294152    4746 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:43:16.294216    4746 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
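
	The 130-byte /etc/docker/daemon.json pushed here is not echoed in the log; the line above only tells us it selects the cgroupfs driver. A hedged reconstruction, with the one key we can infer marked and the rest left unknown:

	  sudo cat /etc/docker/daemon.json
	  # {
	  #   "exec-opts": ["native.cgroupdriver=cgroupfs"],   # implied by the log line above
	  #   ...remaining keys unknown from this log...
	  # }
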
	I0917 10:43:16.299660    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:43:16.393608    4746 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:43:18.064837    4746 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.671265208s)
	I0917 10:43:18.064925    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:43:18.069228    4746 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:43:18.075151    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:43:18.080500    4746 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:43:18.170875    4746 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:43:18.255136    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:43:18.339490    4746 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:43:18.345887    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:43:18.350495    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:43:18.434236    4746 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:43:18.477582    4746 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:43:18.477674    4746 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:43:18.480219    4746 start.go:563] Will wait 60s for crictl version
	I0917 10:43:18.480273    4746 ssh_runner.go:195] Run: which crictl
	I0917 10:43:18.481844    4746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:43:18.493088    4746 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0917 10:43:18.493176    4746 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:43:18.505676    4746 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:43:18.526182    4746 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0917 10:43:18.526259    4746 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0917 10:43:18.527651    4746 kubeadm.go:883] updating cluster {Name:running-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0917 10:43:18.527693    4746 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 10:43:18.527742    4746 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:43:18.537882    4746 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 10:43:18.537889    4746 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 10:43:18.537934    4746 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 10:43:18.541144    4746 ssh_runner.go:195] Run: which lz4
	I0917 10:43:18.542394    4746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 10:43:18.543637    4746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 10:43:18.543646    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0917 10:43:19.520235    4746 docker.go:649] duration metric: took 977.912792ms to copy over tarball
	I0917 10:43:19.520308    4746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 10:43:21.008991    4746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.488716375s)
	I0917 10:43:21.009003    4746 ssh_runner.go:146] rm: /preloaded.tar.lz4
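
	The preload flow above is: stat to see whether /preloaded.tar.lz4 already exists, scp the ~343 MiB tarball in, untar it over /var, then delete it. Replayed by hand inside the guest (commands lifted from the log; tar's -I lz4 is why minikube first ran `which lz4`):

	  stat -c "%s %y" /preloaded.tar.lz4 || true   # fails on a fresh machine, as above
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm /preloaded.tar.lz4
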
	I0917 10:43:21.036220    4746 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 10:43:21.043775    4746 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0917 10:43:21.057783    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:43:21.159907    4746 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:43:24.233885    4746 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.074054667s)
	I0917 10:43:24.234007    4746 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:43:24.248653    4746 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 10:43:24.248660    4746 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 10:43:24.248665    4746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
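
	The mismatch driving this cache pass is purely a registry rename: the preload ships k8s.gcr.io/* tags while this minikube expects registry.k8s.io/*. Illustrative only — minikube instead reloads each image from its on-host cache, as the following lines show — but retagging would satisfy the same check:

	  docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
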
	I0917 10:43:24.253855    4746 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:43:24.256264    4746 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:43:24.257578    4746 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:43:24.258098    4746 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:43:24.258976    4746 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:43:24.259046    4746 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:43:24.260323    4746 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:43:24.261212    4746 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:43:24.261412    4746 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:43:24.261433    4746 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 10:43:24.262558    4746 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:43:24.262600    4746 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:43:24.263858    4746 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:43:24.263929    4746 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 10:43:24.265147    4746 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:43:24.265798    4746 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:43:24.687029    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:43:24.702260    4746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0917 10:43:24.702288    4746 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:43:24.702359    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:43:24.712887    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:43:24.715177    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0917 10:43:24.717211    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:43:24.727062    4746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0917 10:43:24.727082    4746 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:43:24.727142    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:43:24.729515    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0917 10:43:24.737149    4746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0917 10:43:24.737168    4746 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:43:24.737230    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:43:24.741108    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0917 10:43:24.752407    4746 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0917 10:43:24.752429    4746 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:43:24.752494    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0917 10:43:24.754683    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0917 10:43:24.755810    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0917 10:43:24.765195    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 10:43:24.765321    4746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0917 10:43:24.769061    4746 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0917 10:43:24.769079    4746 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0917 10:43:24.769063    4746 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0917 10:43:24.769111    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0917 10:43:24.769130    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0917 10:43:24.785958    4746 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 10:43:24.786115    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:43:24.786931    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:43:24.788388    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 10:43:24.788500    4746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0917 10:43:24.812203    4746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0917 10:43:24.812231    4746 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:43:24.812293    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:43:24.824191    4746 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0917 10:43:24.824217    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0917 10:43:24.830194    4746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0917 10:43:24.830216    4746 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:43:24.830280    4746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:43:24.846912    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 10:43:24.847048    4746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0917 10:43:24.862297    4746 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0917 10:43:24.862310    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0917 10:43:24.873857    4746 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0917 10:43:24.873863    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0917 10:43:24.873882    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0917 10:43:24.940846    4746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0917 10:43:24.979536    4746 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0917 10:43:24.979552    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0917 10:43:25.086917    4746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0917 10:43:25.124115    4746 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 10:43:25.124226    4746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:43:25.127795    4746 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0917 10:43:25.127805    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0917 10:43:25.141282    4746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0917 10:43:25.141310    4746 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:43:25.141389    4746 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:43:25.257649    4746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0917 10:43:25.257677    4746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 10:43:25.257811    4746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 10:43:25.259340    4746 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0917 10:43:25.259351    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0917 10:43:25.290666    4746 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 10:43:25.290688    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0917 10:43:25.526973    4746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 10:43:25.527012    4746 cache_images.go:92] duration metric: took 1.278379917s to LoadCachedImages
	W0917 10:43:25.527051    4746 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
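
	Each image in this pass follows the same pattern visible above: stat the target under /var/lib/minikube/images, scp it from the host cache, then stream it into Docker. kube-proxy is the one that failed, and it failed on the host side: its cache file was never created, so there was nothing to copy. A sketch of the per-image load plus a check of what actually landed:

	  sudo cat /var/lib/minikube/images/pause_3.7 | docker load   # as run above
	  docker images --format '{{.Repository}}:{{.Tag}}'           # verify the result
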
	I0917 10:43:25.527056    4746 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0917 10:43:25.527111    4746 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-161000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:43:25.527189    4746 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:43:25.540411    4746 cni.go:84] Creating CNI manager for ""
	I0917 10:43:25.540429    4746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:43:25.540436    4746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:43:25.540446    4746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-161000 NodeName:running-upgrade-161000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:43:25.540518    4746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-161000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 10:43:25.540585    4746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0917 10:43:25.543528    4746 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:43:25.543559    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 10:43:25.546454    4746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0917 10:43:25.551502    4746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:43:25.556464    4746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
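
	The 2096-byte file just copied is the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---). A quick structural check, hypothetical but using only the file the log names:

	  grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
	  # kind: InitConfiguration
	  # kind: ClusterConfiguration
	  # kind: KubeletConfiguration
	  # kind: KubeProxyConfiguration
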
	I0917 10:43:25.561595    4746 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0917 10:43:25.562676    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:43:25.654825    4746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:43:25.660042    4746 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000 for IP: 10.0.2.15
	I0917 10:43:25.660048    4746 certs.go:194] generating shared ca certs ...
	I0917 10:43:25.660056    4746 certs.go:226] acquiring lock for ca certs: {Name:mk1d9837d65f8f1762ad8daf2cfbb53face1f201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:43:25.660227    4746 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key
	I0917 10:43:25.660262    4746 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key
	I0917 10:43:25.660267    4746 certs.go:256] generating profile certs ...
	I0917 10:43:25.660324    4746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.key
	I0917 10:43:25.660340    4746 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.key.02df7198
	I0917 10:43:25.660354    4746 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.crt.02df7198 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0917 10:43:25.709377    4746 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.crt.02df7198 ...
	I0917 10:43:25.709381    4746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.crt.02df7198: {Name:mk45a7939ee6e8437b2d8e40a2c5d383a6365754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:43:25.712569    4746 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.key.02df7198 ...
	I0917 10:43:25.712575    4746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.key.02df7198: {Name:mk662336801799c3c6774a404bc633a95ec8b79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:43:25.712715    4746 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.crt.02df7198 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.crt
	I0917 10:43:25.712843    4746 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.key.02df7198 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.key
	I0917 10:43:25.712967    4746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/proxy-client.key
	I0917 10:43:25.713087    4746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840.pem (1338 bytes)
	W0917 10:43:25.713111    4746 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840_empty.pem, impossibly tiny 0 bytes
	I0917 10:43:25.713116    4746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:43:25.713159    4746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:43:25.713182    4746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:43:25.713204    4746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem (1679 bytes)
	I0917 10:43:25.713252    4746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem (1708 bytes)
	I0917 10:43:25.713587    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:43:25.722898    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:43:25.730429    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:43:25.737896    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:43:25.744962    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:43:25.751565    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:43:25.759681    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:43:25.773584    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 10:43:25.783112    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem --> /usr/share/ca-certificates/18402.pem (1708 bytes)
	I0917 10:43:25.793960    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:43:25.801035    4746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840.pem --> /usr/share/ca-certificates/1840.pem (1338 bytes)
	I0917 10:43:25.807531    4746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:43:25.812653    4746 ssh_runner.go:195] Run: openssl version
	I0917 10:43:25.814374    4746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18402.pem && ln -fs /usr/share/ca-certificates/18402.pem /etc/ssl/certs/18402.pem"
	I0917 10:43:25.817637    4746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18402.pem
	I0917 10:43:25.819044    4746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:11 /usr/share/ca-certificates/18402.pem
	I0917 10:43:25.819069    4746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18402.pem
	I0917 10:43:25.820815    4746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18402.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:43:25.823465    4746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:43:25.826824    4746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:43:25.828307    4746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:43:25.828330    4746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:43:25.830051    4746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:43:25.833107    4746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1840.pem && ln -fs /usr/share/ca-certificates/1840.pem /etc/ssl/certs/1840.pem"
	I0917 10:43:25.836090    4746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1840.pem
	I0917 10:43:25.837586    4746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:11 /usr/share/ca-certificates/1840.pem
	I0917 10:43:25.837610    4746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1840.pem
	I0917 10:43:25.839444    4746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1840.pem /etc/ssl/certs/51391683.0"
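
	The three test-and-link pairs above all implement OpenSSL's hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must point at the PEM, which is why each cert is hashed first. The generic recipe, with names taken from the minikubeCA example above:

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
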
	I0917 10:43:25.842484    4746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:43:25.843886    4746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:43:25.845572    4746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:43:25.847368    4746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:43:25.849207    4746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:43:25.850972    4746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:43:25.852725    4746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
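
	Each openssl run above is a 24-hour expiry probe: -checkend N exits non-zero if the certificate expires within N seconds, so a failing exit means the cert needs regenerating. Standalone form:

	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least a day" || echo "expiring within 24h"
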
	I0917 10:43:25.854687    4746 kubeadm.go:392] StartCluster: {Name:running-upgrade-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:43:25.854759    4746 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:43:25.876400    4746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:43:25.879514    4746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:43:25.879526    4746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:43:25.879551    4746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:43:25.882631    4746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:43:25.882861    4746 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-161000" does not appear in /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:43:25.882912    4746 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1312/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-161000" cluster setting kubeconfig missing "running-upgrade-161000" context setting]
	I0917 10:43:25.883049    4746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:43:25.884408    4746 kapi.go:59] client config for running-upgrade-161000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043f1800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:43:25.884719    4746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:43:25.887462    4746 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-161000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0917 10:43:25.887467    4746 kubeadm.go:1160] stopping kube-system containers ...
	I0917 10:43:25.887515    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:43:25.901030    4746 docker.go:483] Stopping containers: [0c902e7e6d4f caab415be964 6423b17eb0f9 6926756d5005 2e047c9d171f e5a17b2e1c2d e1db1a542d7a 780ad08d4d6c ecfe869a94eb 158c1345ab1b 40657e8cb054 04ddf08b182c d4872cd6d338 cb601f829e5b f52e14ef564e 089018bae94b 81c0039c75cb 112b6c2ceb36 aec0cde6f5ae 587cbc94c233]
	I0917 10:43:25.901111    4746 ssh_runner.go:195] Run: docker stop 0c902e7e6d4f caab415be964 6423b17eb0f9 6926756d5005 2e047c9d171f e5a17b2e1c2d e1db1a542d7a 780ad08d4d6c ecfe869a94eb 158c1345ab1b 40657e8cb054 04ddf08b182c d4872cd6d338 cb601f829e5b f52e14ef564e 089018bae94b 81c0039c75cb 112b6c2ceb36 aec0cde6f5ae 587cbc94c233
	I0917 10:43:25.918618    4746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 10:43:25.997962    4746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:43:26.002001    4746 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 17 17:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 17 17:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 17 17:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 17 17:43 /etc/kubernetes/scheduler.conf
	
	I0917 10:43:26.002042    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf
	I0917 10:43:26.005460    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:43:26.005490    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:43:26.008690    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf
	I0917 10:43:26.011468    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:43:26.011498    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:43:26.014621    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf
	I0917 10:43:26.017793    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:43:26.017815    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:43:26.020821    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf
	I0917 10:43:26.023377    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:43:26.023398    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 10:43:26.026255    4746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:43:26.029414    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:43:26.074514    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:43:26.315223    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:43:26.507625    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:43:26.535252    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
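
	Rather than a full `kubeadm init`, the restart path above replays individual phases in order — certs all, kubeconfig all, kubelet-start, control-plane all, etcd local — each against the regenerated /var/tmp/minikube/kubeadm.yaml. The available phases can be listed with the same pinned binary (illustrative):

	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase --help
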
	I0917 10:43:26.556943    4746 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:43:26.557020    4746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:43:27.059316    4746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:43:27.559119    4746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:43:28.059089    4746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:43:28.063234    4746 api_server.go:72] duration metric: took 1.506338708s to wait for apiserver process to appear ...
	I0917 10:43:28.063244    4746 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:43:28.063256    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:43:33.065216    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:43:33.065281    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:43:38.065530    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:43:38.065612    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:43:43.066425    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:43:43.066467    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:43:48.067089    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:43:48.067139    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:43:53.068160    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:43:53.068251    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:43:58.069847    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:43:58.069937    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:03.071957    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:03.072033    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:08.074335    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:08.074436    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:13.077124    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:13.077188    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:18.079573    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:18.079673    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:23.082286    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:23.082368    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:28.084889    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
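
	Every probe in this loop times out after ~5 s (the client deadline) and is retried; only after repeated failures does minikube fall back to the log-gathering below. A manual equivalent from inside the guest (from the host you would hit the forwarded API port instead, 50299 per the cluster config above):

	  curl -k --max-time 5 https://10.0.2.15:8443/healthz   # -k: the minikube CA is not in the local trust store
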
	I0917 10:44:28.085401    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:44:28.120592    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:44:28.120756    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:44:28.141307    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:44:28.141431    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:44:28.155856    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:44:28.155949    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:44:28.168426    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:44:28.168503    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:44:28.181752    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:44:28.181825    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:44:28.192521    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:44:28.192599    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:44:28.202956    4746 logs.go:276] 0 containers: []
	W0917 10:44:28.202967    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:44:28.203036    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:44:28.213242    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:44:28.213258    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:44:28.213263    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:44:28.238047    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:44:28.238057    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:44:28.278794    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:44:28.278801    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:44:28.295273    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:44:28.295284    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:44:28.306587    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:44:28.306598    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:44:28.317949    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:44:28.317960    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:44:28.337575    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:44:28.337586    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:44:28.351188    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:44:28.351198    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:44:28.363079    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:44:28.363091    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:44:28.432146    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:44:28.432159    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:44:28.446129    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:44:28.446142    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:44:28.460697    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:44:28.460710    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:44:28.471838    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:44:28.471849    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:44:28.486696    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:44:28.486706    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:44:28.507823    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:44:28.507832    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:44:28.519627    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:44:28.519637    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:44:28.524364    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:44:28.524372    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
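
	(The cycle above — a healthz probe that times out after ~5s, followed by a full diagnostic sweep — repeats for the rest of this log. For reference, a minimal Go sketch of that retry pattern, illustrative only and not minikube's actual code; the function name and timings are assumptions read off the timestamps above:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers
	// 200 OK or the outer deadline expires, mirroring the "Checking apiserver
	// healthz ..." / "stopped: ... Client.Timeout exceeded" pairs above.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			// ~5s between each "Checking" and "stopped" line suggests a
			// per-request client timeout of this order.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// the test cluster serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
				// in the real log, a diagnostic sweep runs here before retrying
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	In this run the probe never succeeds, so the loop degenerates into the repeated probe-then-gather cycles that follow.)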
	I0917 10:44:31.040620    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:36.042627    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:36.043171    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:44:36.084521    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:44:36.084707    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:44:36.106801    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:44:36.106932    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:44:36.121532    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:44:36.121615    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:44:36.133794    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:44:36.133876    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:44:36.144363    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:44:36.144444    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:44:36.155028    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:44:36.155107    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:44:36.165527    4746 logs.go:276] 0 containers: []
	W0917 10:44:36.165537    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:44:36.165598    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:44:36.176345    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:44:36.176363    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:44:36.176368    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:44:36.190921    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:44:36.190929    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:44:36.204962    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:44:36.204970    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:44:36.216529    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:44:36.216537    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:44:36.257689    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:44:36.257700    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:44:36.271942    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:44:36.271953    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:44:36.288409    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:44:36.288420    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:44:36.302078    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:44:36.302089    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:44:36.312795    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:44:36.312810    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:44:36.323881    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:44:36.323890    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:44:36.349529    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:44:36.349540    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:44:36.353653    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:44:36.353662    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:44:36.367324    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:44:36.367334    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:44:36.379428    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:44:36.379457    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:44:36.391379    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:44:36.391389    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:44:36.433835    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:44:36.433850    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:44:36.445574    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:44:36.445587    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:44:38.964900    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:43.967597    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:43.968116    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:44:44.006806    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:44:44.006976    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:44:44.028743    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:44:44.028870    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:44:44.043915    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:44:44.043999    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:44:44.057062    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:44:44.057151    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:44:44.067740    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:44:44.067829    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:44:44.078709    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:44:44.078796    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:44:44.088950    4746 logs.go:276] 0 containers: []
	W0917 10:44:44.088962    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:44:44.089038    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:44:44.099353    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:44:44.099372    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:44:44.099377    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:44:44.117314    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:44:44.117327    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:44:44.134190    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:44:44.134200    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:44:44.156337    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:44:44.156347    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:44:44.183019    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:44:44.183026    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:44:44.226698    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:44:44.226704    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:44:44.238623    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:44:44.238636    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:44:44.251370    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:44:44.251387    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:44:44.263510    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:44:44.263520    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:44:44.275465    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:44:44.275476    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:44:44.279945    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:44:44.279951    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:44:44.315476    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:44:44.315487    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:44:44.329056    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:44:44.329066    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:44:44.341417    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:44:44.341429    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:44:44.353644    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:44:44.353662    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:44:44.365138    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:44:44.365149    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:44:44.379024    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:44:44.379032    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:44:46.893409    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:51.895725    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:51.895888    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:44:51.912231    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:44:51.912319    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:44:51.925679    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:44:51.925763    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:44:51.936146    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:44:51.936228    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:44:51.947217    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:44:51.947305    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:44:51.957386    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:44:51.957468    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:44:51.973393    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:44:51.973471    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:44:51.985361    4746 logs.go:276] 0 containers: []
	W0917 10:44:51.985371    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:44:51.985433    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:44:51.996051    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:44:51.996071    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:44:51.996076    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:44:52.010082    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:44:52.010092    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:44:52.022452    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:44:52.022462    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:44:52.033451    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:44:52.033467    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:44:52.044614    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:44:52.044622    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:44:52.056881    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:44:52.056889    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:44:52.098992    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:44:52.099002    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:44:52.103437    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:44:52.103446    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:44:52.114880    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:44:52.114892    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:44:52.137506    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:44:52.137515    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:44:52.163181    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:44:52.163188    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:44:52.176728    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:44:52.176738    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:44:52.189498    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:44:52.189510    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:44:52.206071    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:44:52.206082    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:44:52.219147    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:44:52.219158    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:44:52.234380    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:44:52.234392    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:44:52.269487    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:44:52.269500    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:44:54.783269    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:44:59.785808    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:44:59.786404    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:44:59.826093    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:44:59.826257    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:44:59.847207    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:44:59.847341    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:44:59.862192    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:44:59.862289    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:44:59.874694    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:44:59.874783    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:44:59.889237    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:44:59.889323    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:44:59.900058    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:44:59.900135    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:44:59.910575    4746 logs.go:276] 0 containers: []
	W0917 10:44:59.910587    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:44:59.910660    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:44:59.920937    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:44:59.920956    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:44:59.920962    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:44:59.934615    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:44:59.934625    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:44:59.950979    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:44:59.950989    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:44:59.962947    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:44:59.962957    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:44:59.980242    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:44:59.980252    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:44:59.992158    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:44:59.992168    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:00.027680    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:00.027692    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:00.041563    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:00.041573    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:00.053072    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:00.053081    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:00.057331    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:00.057340    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:00.068820    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:00.068830    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:00.079586    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:00.079595    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:00.090367    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:00.090377    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:00.106196    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:00.106204    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:00.149491    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:00.149500    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:00.162149    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:00.162160    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:00.173058    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:00.173068    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:02.699412    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:07.702041    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:45:07.702538    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:07.736872    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:07.737028    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:07.756850    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:07.756953    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:07.771495    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:07.771582    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:07.784059    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:07.784150    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:07.795094    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:07.795162    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:07.807465    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:07.807545    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:07.818334    4746 logs.go:276] 0 containers: []
	W0917 10:45:07.818345    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:07.818416    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:07.829150    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:07.829168    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:07.829173    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:07.841019    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:07.841032    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:07.852701    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:07.852712    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:45:07.865310    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:07.865321    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:07.879128    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:07.879139    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:07.890397    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:07.890409    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:07.907895    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:07.907904    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:07.924726    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:07.924738    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:07.971315    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:07.971325    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:08.010407    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:08.010421    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:08.022215    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:08.022228    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:08.036489    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:08.036501    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:08.048178    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:08.048190    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:08.060126    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:08.060134    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:08.084042    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:08.084049    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:08.088361    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:08.088368    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:08.102560    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:08.102571    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:10.615282    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:15.617936    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:45:15.618511    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:15.658362    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:15.658538    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:15.680064    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:15.680195    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:15.695378    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:15.695468    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:15.708088    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:15.708172    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:15.719291    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:15.719373    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:15.730060    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:15.730142    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:15.739913    4746 logs.go:276] 0 containers: []
	W0917 10:45:15.739925    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:15.739985    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:15.750378    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:15.750397    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:15.750402    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:15.766369    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:15.766378    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:15.787782    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:15.787791    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:15.799220    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:15.799231    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:15.833980    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:15.833996    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:15.848126    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:15.848135    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:15.867432    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:15.867442    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:15.879807    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:15.879817    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:15.891091    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:15.891102    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:15.917112    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:15.917126    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:15.931708    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:15.931722    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:45:15.945365    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:15.945381    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:15.990296    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:15.990309    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:15.994867    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:15.994874    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:16.008433    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:16.008443    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:16.022420    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:16.022433    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:16.035362    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:16.035373    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:18.548708    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:23.551386    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:45:23.551936    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:23.591245    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:23.591409    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:23.612564    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:23.612666    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:23.627301    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:23.627393    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:23.640254    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:23.640338    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:23.654361    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:23.654440    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:23.665392    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:23.665472    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:23.681117    4746 logs.go:276] 0 containers: []
	W0917 10:45:23.681129    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:23.681201    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:23.691513    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:23.691535    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:23.691540    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:23.702780    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:23.702791    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:23.707805    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:23.707814    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:23.723142    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:23.723152    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:23.734383    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:23.734393    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:23.751425    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:23.751438    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:23.762730    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:23.762742    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:45:23.778669    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:23.778678    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:23.792881    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:23.792891    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:23.805239    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:23.805248    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:23.820252    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:23.820259    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:23.832228    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:23.832238    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:23.856071    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:23.856079    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:23.897768    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:23.897788    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:23.932859    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:23.932874    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:23.949125    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:23.949136    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:23.960626    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:23.960638    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:26.474398    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:31.475559    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:45:31.476134    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:31.518543    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:31.518743    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:31.539426    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:31.539549    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:31.554574    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:31.554664    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:31.566814    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:31.566894    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:31.577424    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:31.577497    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:31.587768    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:31.587843    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:31.597818    4746 logs.go:276] 0 containers: []
	W0917 10:45:31.597830    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:31.597896    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:31.608217    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:31.608235    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:31.608240    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:31.620277    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:31.620290    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:31.638985    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:31.638995    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:31.649917    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:31.649930    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:31.675587    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:31.675597    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:31.716235    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:31.716249    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:31.748806    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:31.748822    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:31.760953    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:31.760964    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:45:31.772844    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:31.772854    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:31.784699    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:31.784710    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:31.801623    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:31.801632    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:31.818667    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:31.818676    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:31.832386    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:31.832396    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:31.846161    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:31.846170    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:31.889482    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:31.889492    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:31.893647    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:31.893654    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:31.907337    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:31.907349    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:34.420393    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:39.422600    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:45:39.423261    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:39.462389    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:39.462552    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:39.484175    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:39.484318    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:39.499572    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:39.499671    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:39.512007    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:39.512092    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:39.522828    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:39.522898    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:39.533941    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:39.534027    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:39.544484    4746 logs.go:276] 0 containers: []
	W0917 10:45:39.544500    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:39.544575    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:39.554910    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:39.554926    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:39.554930    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:39.576023    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:39.576032    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:39.587099    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:39.587111    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:39.598523    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:39.598534    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:39.612390    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:39.612405    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:39.626656    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:39.626668    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:39.638299    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:39.638310    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:39.651964    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:39.651972    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:39.676710    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:39.676716    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:39.717185    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:39.717193    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:39.721265    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:39.721274    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:39.734856    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:39.734868    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:39.752461    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:39.752471    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:39.763934    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:39.763945    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:45:39.775589    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:39.775601    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:39.811567    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:39.811580    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:39.824229    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:39.824242    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:42.337340    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:47.338084    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:45:47.338392    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:47.349338    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:47.349425    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:47.360029    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:47.360110    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:47.374812    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:47.374897    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:47.387095    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:47.387181    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:47.398480    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:47.398566    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:47.410101    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:47.410184    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:47.420412    4746 logs.go:276] 0 containers: []
	W0917 10:45:47.420424    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:47.420494    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:47.445589    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:47.445608    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:47.445615    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:47.490667    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:47.490685    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:47.508772    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:47.508789    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:47.527633    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:47.527644    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:47.539071    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:47.539083    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:45:47.564500    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:47.564516    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:47.577007    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:47.577022    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:47.591488    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:47.591504    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:47.605248    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:47.605259    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:47.617006    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:47.617017    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:47.628765    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:47.628780    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:47.632993    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:47.633000    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:47.674547    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:47.674557    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:47.689153    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:47.689164    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:47.701767    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:47.701782    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:47.712882    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:47.712894    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:47.724789    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:47.724799    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
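
The "container status" command above is a shell fallback: `which crictl || echo crictl` expands to crictl's path when it is installed (and to the bare, failing name when it is not), so the outer `||` drops through to `sudo docker ps -a`. A sketch of the same fallback driven from Go (assumed wrapper, not the actual runner):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when present and falls back to
// `docker ps -a` otherwise, exactly as the one-liner in the log does.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(status)
}
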
	I0917 10:45:50.239128    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:45:55.241302    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
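
Each "Checking apiserver healthz" / "stopped" pair above is one HTTPS probe that gives the apiserver roughly five seconds to answer before the client gives up; the quoted error text is Go's standard http.Client timeout message. A self-contained sketch of that probe (the 5s timeout and skip-verify TLS config are assumptions read off the log, not confirmed minikube settings):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver /healthz endpoint with a short
// client timeout, reproducing the pattern seen in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// assumption: the cluster serves a self-signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
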
	I0917 10:45:55.241877    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:45:55.288766    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:45:55.288947    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:45:55.313218    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:45:55.313318    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:45:55.330305    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:45:55.330386    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:45:55.343447    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:45:55.343563    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:45:55.355341    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:45:55.355439    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:45:55.367951    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:45:55.368037    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:45:55.378963    4746 logs.go:276] 0 containers: []
	W0917 10:45:55.378978    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:45:55.379047    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:45:55.390138    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:45:55.390156    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:45:55.390162    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:45:55.395130    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:45:55.395137    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:45:55.430361    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:45:55.430376    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:45:55.442891    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:45:55.442900    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:45:55.459940    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:45:55.459952    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:45:55.471995    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:45:55.472006    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:45:55.486224    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:45:55.486234    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:45:55.500145    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:45:55.500156    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:45:55.511192    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:45:55.511204    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:45:55.529354    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:45:55.529366    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:45:55.541267    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:45:55.541281    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:45:55.563007    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:45:55.563017    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:45:55.574654    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:45:55.574667    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:45:55.587620    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:45:55.587633    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:45:55.631745    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:45:55.631758    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:45:55.644035    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:45:55.644046    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:45:55.655431    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:45:55.655443    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
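
The kubelet and Docker entries in each cycle come straight from systemd's journal: `journalctl -u kubelet -n 400` for the kubelet and `journalctl -u docker -u cri-docker -n 400` for the runtime, i.e. the last 400 entries per query. A small wrapper illustrating the same call shape (hypothetical helper name):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// unitLogs fetches the last n journal entries for one or more systemd
// units, matching the journalctl invocations in the log above.
func unitLogs(n int, units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", strconv.Itoa(n))
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if out, err := unitLogs(400, "docker", "cri-docker"); err == nil {
		fmt.Print(out)
	} else {
		fmt.Println("journalctl failed:", err)
	}
}
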
	I0917 10:45:58.181440    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:03.183491    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:03.183631    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:03.194628    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:03.194714    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:03.205713    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:03.205805    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:03.221939    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:03.222021    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:03.234840    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:03.234923    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:03.245666    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:03.245743    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:03.256475    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:03.256550    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:03.266976    4746 logs.go:276] 0 containers: []
	W0917 10:46:03.266987    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:03.267050    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:03.278514    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:03.278532    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:03.278538    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:03.290955    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:03.290970    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:03.304322    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:03.304334    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:03.316927    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:03.316942    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:03.321248    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:03.321255    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:03.337732    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:03.337745    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:03.350714    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:03.350726    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:03.362959    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:03.362971    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:03.381085    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:03.381097    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:03.407113    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:03.407131    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:03.451651    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:03.451667    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:03.465451    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:03.465461    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:03.483246    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:03.483261    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:03.496289    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:03.496305    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:03.508916    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:03.508927    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:03.523278    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:03.523290    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:03.537439    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:03.537449    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
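
Note that the "describe nodes" step does not use any kubectl on the host: it invokes the version-pinned binary minikube placed inside the VM (/var/lib/minikube/binaries/v1.24.1/kubectl) against the in-VM kubeconfig, so it works even with no client configured outside. The shape of that call, with paths copied from the log (sketch only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl version is pinned to the cluster's Kubernetes version
	// rather than taken from the host PATH.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
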
	I0917 10:46:06.076564    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:11.079048    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:11.079261    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:11.095195    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:11.095285    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:11.106135    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:11.106222    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:11.117394    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:11.117478    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:11.130085    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:11.130168    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:11.140852    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:11.140929    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:11.151461    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:11.151549    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:11.162039    4746 logs.go:276] 0 containers: []
	W0917 10:46:11.162053    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:11.162127    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:11.173369    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:11.173390    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:11.173396    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:11.184485    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:11.184498    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:11.196257    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:11.196269    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:11.207686    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:11.207698    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:11.230912    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:11.230921    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:11.243905    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:11.243915    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:11.257445    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:11.257456    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:11.261749    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:11.261755    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:11.299072    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:11.299085    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:11.313670    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:11.313682    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:11.327419    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:11.327429    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:11.346594    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:11.346604    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:11.364865    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:11.364875    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:11.375833    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:11.375852    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:11.417047    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:11.417057    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:11.428217    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:11.428227    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:11.439995    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:11.440009    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
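
Every per-component entry in a cycle reduces to tailing that container's output with `docker logs --tail 400 <id>`, the IDs coming from the discovery step at the top of the cycle. The equivalent call in Go (assumed wrapper):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// tailContainerLogs returns the last n lines a container wrote — the
// same `docker logs --tail 400 <id>` call repeated throughout the log.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", strconv.Itoa(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("4fc227e49c92", 400) // coredns ID from the log above
	if err != nil {
		fmt.Println("docker logs failed:", err)
		return
	}
	fmt.Print(logs)
}
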
	I0917 10:46:13.954876    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:18.957037    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:18.957644    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:18.998042    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:18.998192    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:19.017745    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:19.017840    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:19.032348    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:19.032439    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:19.044542    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:19.044635    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:19.055685    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:19.055764    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:19.066449    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:19.066529    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:19.076897    4746 logs.go:276] 0 containers: []
	W0917 10:46:19.076907    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:19.076970    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:19.087661    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:19.087682    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:19.087688    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:19.098725    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:19.098735    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:19.115267    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:19.115278    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:19.127068    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:19.127079    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:19.151321    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:19.151331    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:19.155684    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:19.155693    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:19.170791    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:19.170805    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:19.185457    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:19.185466    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:19.198728    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:19.198739    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:19.209784    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:19.209798    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:19.227782    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:19.227792    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:19.239239    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:19.239251    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:19.251005    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:19.251020    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:19.285763    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:19.285774    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:19.299335    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:19.299345    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:19.313023    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:19.313032    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:19.324528    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:19.324540    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
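
One last recurring collector: the dmesg step keeps only warning-and-worse kernel messages (`--level warn,err,crit,alert,emerg`), human-formatted (-H) with the pager (-P) and color (-L=never) disabled, then trims to the last 400 lines. The same pipeline from Go (flag meanings per util-linux dmesg; the wrapper itself is a sketch):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -P: no pager, -H: human-readable, -L=never: no color,
	// --level ...: restrict to warning severity and worse.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}
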
	I0917 10:46:21.867163    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:26.868937    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:26.869154    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:26.885283    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:26.885382    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:26.904578    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:26.904663    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:26.914534    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:26.914617    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:26.925392    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:26.925465    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:26.935829    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:26.935908    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:26.952592    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:26.952668    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:26.962656    4746 logs.go:276] 0 containers: []
	W0917 10:46:26.962669    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:26.962737    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:26.980535    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:26.980552    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:26.980557    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:27.014607    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:27.014619    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:27.026171    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:27.026183    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:27.038376    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:27.038391    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:27.050423    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:27.050440    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:27.061716    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:27.061729    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:27.084054    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:27.084070    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:27.126399    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:27.126410    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:27.139549    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:27.139559    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:27.161213    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:27.161224    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:27.172536    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:27.172547    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:27.183295    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:27.183311    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:27.194923    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:27.194935    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:27.213094    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:27.213104    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:27.217661    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:27.217670    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:27.231157    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:27.231169    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:27.243165    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:27.243174    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:29.758987    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:34.761605    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:34.761723    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:34.772785    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:34.772869    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:34.784189    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:34.784278    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:34.794679    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:34.794756    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:34.805700    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:34.805782    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:34.816069    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:34.816143    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:34.826853    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:34.826926    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:34.837803    4746 logs.go:276] 0 containers: []
	W0917 10:46:34.837815    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:34.837885    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:34.848459    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:34.848484    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:34.848489    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:34.861623    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:34.861639    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:34.872926    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:34.872937    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:34.884646    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:34.884657    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:34.896551    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:34.896561    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:34.908936    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:34.908946    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:34.952765    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:34.952775    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:34.957600    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:34.957610    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:34.995850    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:34.995860    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:35.010507    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:35.010519    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:35.024367    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:35.024378    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:35.041410    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:35.041423    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:35.053695    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:35.053704    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:35.070805    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:35.070821    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:35.082471    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:35.082480    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:35.094559    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:35.094568    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:35.105760    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:35.105772    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:37.630544    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:42.632653    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:42.632900    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:42.650677    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:42.650791    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:42.664856    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:42.664947    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:42.676570    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:42.676651    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:42.687402    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:42.687489    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:42.698959    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:42.699051    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:42.712323    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:42.712403    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:42.725903    4746 logs.go:276] 0 containers: []
	W0917 10:46:42.725915    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:42.725986    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:42.739698    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:42.739715    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:42.739720    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:42.753112    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:42.753122    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:42.797876    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:42.797886    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:42.802819    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:42.802829    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:42.813805    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:42.813818    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:42.838552    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:42.838565    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:42.872209    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:42.872225    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:42.886351    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:42.886359    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:42.901666    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:42.901682    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:42.913146    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:42.913157    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:42.924647    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:42.924661    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:42.942023    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:42.942033    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:42.955082    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:42.955096    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:42.967535    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:42.967545    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:42.984791    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:42.984801    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:42.996098    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:42.996112    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:43.007583    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:43.007594    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:45.519290    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:50.521947    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:50.522078    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:50.536423    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:50.536517    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:50.560481    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:50.560577    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:50.572902    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:50.572989    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:50.592451    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:50.592549    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:50.604481    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:50.604571    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:50.616242    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:50.616348    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:50.632336    4746 logs.go:276] 0 containers: []
	W0917 10:46:50.632350    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:50.632435    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:50.646022    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:50.646040    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:50.646046    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:50.661409    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:50.661427    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:50.699688    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:50.699706    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:50.723618    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:50.723637    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:50.741926    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:50.741940    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:50.755968    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:50.755983    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:50.777909    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:50.777925    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:50.792166    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:50.792179    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:50.836173    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:50.836194    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:50.855409    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:50.855423    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:50.868424    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:50.868437    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:50.873320    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:50.873331    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:50.890632    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:50.890644    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:50.904126    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:50.904138    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:50.917304    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:50.917316    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:50.930189    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:50.930200    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:50.954713    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:50.954728    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:53.498093    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:58.500130    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:58.500274    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:58.512394    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:58.512473    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:58.523376    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:58.523460    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:58.534363    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:58.534445    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:58.545592    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:58.545680    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:58.556476    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:58.556563    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:58.567592    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:58.567677    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:58.578153    4746 logs.go:276] 0 containers: []
	W0917 10:46:58.578166    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:58.578244    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:58.589115    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:58.589134    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:58.589139    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:58.628138    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:58.628151    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:58.640506    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:58.640522    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:58.653248    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:58.653261    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:58.665858    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:58.665871    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:58.680947    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:58.680958    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:58.693256    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:58.693269    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:58.704880    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:58.704891    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:58.717060    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:58.717073    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:58.726225    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:58.726236    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:58.740934    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:58.740951    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:58.752635    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:58.752647    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:58.773535    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:58.773545    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:58.797740    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:58.797751    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:58.841978    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:58.841998    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:58.854615    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:58.854625    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:58.868151    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:58.868167    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:01.387236    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:06.389341    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:06.389611    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:06.410459    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:47:06.410574    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:06.426195    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:47:06.426311    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:06.438628    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:47:06.438711    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:06.449198    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:47:06.449277    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:06.459645    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:47:06.459717    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:06.470391    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:47:06.470472    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:06.480872    4746 logs.go:276] 0 containers: []
	W0917 10:47:06.480884    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:06.480954    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:06.491359    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:47:06.491376    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:47:06.491382    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:47:06.502618    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:06.502629    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:06.537127    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:47:06.537138    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:47:06.551192    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:47:06.551203    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:47:06.567466    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:47:06.567477    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:06.584362    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:47:06.584373    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:47:06.597496    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:47:06.597511    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:47:06.608940    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:47:06.608953    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:47:06.620356    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:06.620369    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:06.664400    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:47:06.664418    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:47:06.679962    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:47:06.679973    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:47:06.692452    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:47:06.692462    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:47:06.707563    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:47:06.707575    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:06.719726    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:06.719736    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:06.727829    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:47:06.727839    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:47:06.739979    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:47:06.739991    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:47:06.751306    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:06.751322    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:09.278093    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:14.278776    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:14.278906    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:14.291105    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:47:14.291184    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:14.302609    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:47:14.302695    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:14.313959    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:47:14.314046    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:14.325198    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:47:14.325280    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:14.336066    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:47:14.336156    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:14.349712    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:47:14.349798    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:14.364965    4746 logs.go:276] 0 containers: []
	W0917 10:47:14.364978    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:14.365055    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:14.375846    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:47:14.375862    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:14.375867    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:14.410109    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:47:14.410120    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:47:14.425039    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:47:14.425055    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:47:14.438331    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:47:14.438345    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:47:14.449730    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:47:14.449743    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:47:14.469289    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:47:14.469302    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:47:14.480925    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:14.480936    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:14.485387    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:47:14.485395    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:47:14.503163    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:47:14.503179    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:47:14.516284    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:14.516294    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:14.559348    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:47:14.559368    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:47:14.570315    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:47:14.570328    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:47:14.581587    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:47:14.581597    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:47:14.593281    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:47:14.593293    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:14.605883    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:47:14.605893    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:14.623592    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:14.623601    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:14.644976    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:47:14.644987    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:47:17.164929    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:22.167058    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:22.167307    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:22.193609    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:47:22.193727    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:22.209534    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:47:22.209629    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:22.222021    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:47:22.222116    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:22.233703    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:47:22.233784    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:22.244241    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:47:22.244312    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:22.254632    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:47:22.254713    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:22.265770    4746 logs.go:276] 0 containers: []
	W0917 10:47:22.265781    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:22.265845    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:22.276939    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:47:22.276957    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:22.276963    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:22.312935    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:47:22.312944    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:47:22.325045    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:47:22.325055    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:47:22.338701    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:22.338711    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:22.361006    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:47:22.361014    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:22.372866    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:22.372880    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:22.414412    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:22.414419    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:22.418766    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:47:22.418772    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:47:22.473715    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:47:22.473730    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:47:22.484964    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:47:22.484978    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:47:22.500958    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:47:22.500969    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:47:22.512424    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:47:22.512436    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:47:22.523773    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:47:22.523787    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:22.541319    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:47:22.541329    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:47:22.552954    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:47:22.552968    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:47:22.567848    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:47:22.567861    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:47:22.581360    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:47:22.581371    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:47:25.094963    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:30.095930    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:30.095981    4746 kubeadm.go:597] duration metric: took 4m4.224001291s to restartPrimaryControlPlane
	W0917 10:47:30.096016    4746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 10:47:30.096034    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0917 10:47:31.123448    4746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027434042s)
	I0917 10:47:31.123530    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:47:31.128585    4746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:47:31.131457    4746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:47:31.134000    4746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 10:47:31.134006    4746 kubeadm.go:157] found existing configuration files:
	
	I0917 10:47:31.134030    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf
	I0917 10:47:31.137084    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 10:47:31.137111    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:47:31.140249    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf
	I0917 10:47:31.142732    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 10:47:31.142758    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:47:31.145626    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf
	I0917 10:47:31.148500    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 10:47:31.148524    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:47:31.151009    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf
	I0917 10:47:31.153762    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 10:47:31.153788    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
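
The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint (control-plane.minikube.internal:50299 here) is removed so the subsequent kubeadm init can regenerate it. A minimal sketch of the same pattern (the loop is an illustration, not minikube's actual code):

endpoint="https://control-plane.minikube.internal:50299"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # grep exits non-zero if the endpoint is absent or the file is missing;
    # either way the stale file is deleted before kubeadm init runs.
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done
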
	I0917 10:47:31.156821    4746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 10:47:31.175422    4746 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 10:47:31.175454    4746 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 10:47:31.222330    4746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 10:47:31.222399    4746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 10:47:31.222456    4746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 10:47:31.275817    4746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 10:47:31.280003    4746 out.go:235]   - Generating certificates and keys ...
	I0917 10:47:31.280041    4746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 10:47:31.280075    4746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 10:47:31.280116    4746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 10:47:31.280152    4746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 10:47:31.280188    4746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 10:47:31.280218    4746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 10:47:31.280253    4746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 10:47:31.280288    4746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 10:47:31.280331    4746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 10:47:31.280375    4746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 10:47:31.280400    4746 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 10:47:31.280436    4746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 10:47:31.372090    4746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 10:47:31.480128    4746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 10:47:31.608937    4746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 10:47:31.701806    4746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 10:47:31.735700    4746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 10:47:31.736097    4746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 10:47:31.736186    4746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 10:47:31.822442    4746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 10:47:31.825354    4746 out.go:235]   - Booting up control plane ...
	I0917 10:47:31.825404    4746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 10:47:31.825442    4746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 10:47:31.825482    4746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 10:47:31.825553    4746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 10:47:31.826449    4746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 10:47:36.835766    4746 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.008977 seconds
	I0917 10:47:36.835866    4746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 10:47:36.842283    4746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 10:47:37.352059    4746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 10:47:37.352154    4746 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-161000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 10:47:37.862038    4746 kubeadm.go:310] [bootstrap-token] Using token: il327p.updajlxgrwyov07z
	I0917 10:47:37.864824    4746 out.go:235]   - Configuring RBAC rules ...
	I0917 10:47:37.864919    4746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 10:47:37.865873    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 10:47:37.873713    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 10:47:37.875254    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0917 10:47:37.876593    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 10:47:37.877860    4746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 10:47:37.882973    4746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 10:47:38.054906    4746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 10:47:38.267520    4746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 10:47:38.268018    4746 kubeadm.go:310] 
	I0917 10:47:38.268047    4746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 10:47:38.268051    4746 kubeadm.go:310] 
	I0917 10:47:38.268087    4746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 10:47:38.268091    4746 kubeadm.go:310] 
	I0917 10:47:38.268106    4746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 10:47:38.268138    4746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 10:47:38.268171    4746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 10:47:38.268174    4746 kubeadm.go:310] 
	I0917 10:47:38.268202    4746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 10:47:38.268209    4746 kubeadm.go:310] 
	I0917 10:47:38.268230    4746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 10:47:38.268232    4746 kubeadm.go:310] 
	I0917 10:47:38.268255    4746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 10:47:38.268302    4746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 10:47:38.268377    4746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 10:47:38.268383    4746 kubeadm.go:310] 
	I0917 10:47:38.268435    4746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 10:47:38.268475    4746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 10:47:38.268480    4746 kubeadm.go:310] 
	I0917 10:47:38.268520    4746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token il327p.updajlxgrwyov07z \
	I0917 10:47:38.268595    4746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d \
	I0917 10:47:38.268609    4746 kubeadm.go:310] 	--control-plane 
	I0917 10:47:38.268613    4746 kubeadm.go:310] 
	I0917 10:47:38.268657    4746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 10:47:38.268665    4746 kubeadm.go:310] 
	I0917 10:47:38.268700    4746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token il327p.updajlxgrwyov07z \
	I0917 10:47:38.268751    4746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d 
	I0917 10:47:38.268832    4746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
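
The join commands printed above pin the control plane's identity with --discovery-token-ca-cert-hash, a SHA-256 digest of the cluster CA's public key. Should the hash ever need to be recomputed, the standard recipe from the kubeadm documentation is the following (the path assumes kubeadm's default PKI layout; this minikube guest keeps its certificates under /var/lib/minikube/certs instead):

# SHA-256 of the CA public key, as expected by --discovery-token-ca-cert-hash.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
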
	I0917 10:47:38.268840    4746 cni.go:84] Creating CNI manager for ""
	I0917 10:47:38.268849    4746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:47:38.270304    4746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 10:47:38.277068    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 10:47:38.280778    4746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
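
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced at the "Configuring bridge CNI" step. The log does not show its contents; a representative bridge-plus-portmap conflist of roughly that shape, written the same way, would look like the following (field values are illustrative, not the exact bytes minikube generated):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
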
	I0917 10:47:38.286262    4746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 10:47:38.286324    4746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 10:47:38.286332    4746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-161000 minikube.k8s.io/updated_at=2024_09_17T10_47_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=running-upgrade-161000 minikube.k8s.io/primary=true
	I0917 10:47:38.332797    4746 ops.go:34] apiserver oom_adj: -16
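
The oom_adj value reported above comes from the cat /proc/$(pgrep kube-apiserver)/oom_adj check a few lines earlier and confirms the kernel's OOM killer is strongly biased away from the apiserver: on the legacy oom_adj scale of -17..15, -16 is one step above "never kill" (-17 disables OOM killing entirely). Assuming a Linux guest, both the legacy and modern views can be inspected directly:

# Lower values make the OOM killer less likely to choose the process.
pid=$(pgrep -n kube-apiserver)
cat /proc/$pid/oom_adj        # legacy scale: -17 .. 15
cat /proc/$pid/oom_score_adj  # modern scale: -1000 .. 1000
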
	I0917 10:47:38.332826    4746 kubeadm.go:1113] duration metric: took 46.557458ms to wait for elevateKubeSystemPrivileges
	I0917 10:47:38.332841    4746 kubeadm.go:394] duration metric: took 4m12.485961333s to StartCluster
	I0917 10:47:38.332851    4746 settings.go:142] acquiring lock: {Name:mk01dda79792b7eaa96d8ee72bfae59b39d5fab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:47:38.332937    4746 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:47:38.333352    4746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:47:38.333567    4746 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:47:38.333621    4746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:47:38.333658    4746 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-161000"
	I0917 10:47:38.333668    4746 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-161000"
	W0917 10:47:38.333671    4746 addons.go:243] addon storage-provisioner should already be in state true
	I0917 10:47:38.333646    4746 config.go:182] Loaded profile config "running-upgrade-161000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:47:38.333686    4746 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-161000"
	I0917 10:47:38.333691    4746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-161000"
	I0917 10:47:38.333684    4746 host.go:66] Checking if "running-upgrade-161000" exists ...
	I0917 10:47:38.334574    4746 kapi.go:59] client config for running-upgrade-161000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043f1800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:47:38.334699    4746 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-161000"
	W0917 10:47:38.334704    4746 addons.go:243] addon default-storageclass should already be in state true
	I0917 10:47:38.334710    4746 host.go:66] Checking if "running-upgrade-161000" exists ...
	I0917 10:47:38.338022    4746 out.go:177] * Verifying Kubernetes components...
	I0917 10:47:38.338398    4746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 10:47:38.342149    4746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 10:47:38.342156    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:47:38.344933    4746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:47:38.348996    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:47:38.353063    4746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:47:38.353070    4746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 10:47:38.353076    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:47:38.438559    4746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:47:38.445498    4746 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:47:38.445557    4746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:47:38.449691    4746 api_server.go:72] duration metric: took 116.117167ms to wait for apiserver process to appear ...
	I0917 10:47:38.449699    4746 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:47:38.449706    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:38.460265    4746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:47:38.530347    4746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 10:47:38.803518    4746 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:47:38.803533    4746 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:47:43.451753    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:43.451853    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:48.452487    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:48.452508    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:53.452831    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:53.452856    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:58.453572    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:58.453608    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:03.454337    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:03.454404    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:08.455426    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:08.455462    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 10:48:08.804914    4746 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 10:48:08.809270    4746 out.go:177] * Enabled addons: storage-provisioner
	I0917 10:48:08.817114    4746 addons.go:510] duration metric: took 30.484433542s for enable addons: enabled=[storage-provisioner]
	I0917 10:48:13.456731    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:13.456779    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:18.458477    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:18.458509    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:23.460619    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:23.460648    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:28.462727    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:28.462746    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:33.464768    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:33.464798    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:38.465007    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:38.465138    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:38.475963    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:48:38.476046    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:38.486418    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:48:38.486494    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:38.496992    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:48:38.497083    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:38.507532    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:48:38.507605    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:38.518120    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:48:38.518195    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:38.528580    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:48:38.528664    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:38.538805    4746 logs.go:276] 0 containers: []
	W0917 10:48:38.538817    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:38.538893    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:38.550686    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:48:38.550699    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:38.550704    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:38.555551    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:38.555559    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:38.592167    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:48:38.592182    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:48:38.603861    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:48:38.603871    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:48:38.620193    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:38.620204    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:38.643620    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:48:38.643631    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:38.654544    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:38.654555    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:38.687684    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:48:38.687693    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:48:38.701378    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:48:38.701389    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:48:38.713323    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:48:38.713338    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:48:38.731246    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:48:38.731257    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:48:38.749387    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:48:38.749397    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:48:38.761834    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:48:38.761847    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:48:41.278072    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:46.280521    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:46.280759    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:46.305631    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:48:46.305752    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:46.320890    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:48:46.320988    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:46.334828    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:48:46.334911    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:46.345833    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:48:46.345924    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:46.357344    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:48:46.357425    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:46.368082    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:48:46.368169    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:46.378599    4746 logs.go:276] 0 containers: []
	W0917 10:48:46.378612    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:46.378686    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:46.388327    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:48:46.388348    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:48:46.388354    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:48:46.399607    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:48:46.399622    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:48:46.412171    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:48:46.412182    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:48:46.430045    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:48:46.430054    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:48:46.441730    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:46.441740    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:46.477965    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:46.477979    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:46.511968    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:48:46.511978    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:48:46.527749    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:48:46.527758    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:48:46.540906    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:46.540920    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:46.564743    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:48:46.564752    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:46.576702    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:46.576712    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:46.581495    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:48:46.581502    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:48:46.595264    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:48:46.595276    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:48:49.111493    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:54.112146    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:54.112594    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:54.143604    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:48:54.143761    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:54.166298    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:48:54.166403    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:54.181953    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:48:54.182040    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:54.193108    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:48:54.193178    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:54.203759    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:48:54.203847    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:54.215252    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:48:54.215339    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:54.226074    4746 logs.go:276] 0 containers: []
	W0917 10:48:54.226087    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:54.226164    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:54.237021    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:48:54.237035    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:48:54.237042    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:48:54.250708    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:48:54.250718    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:48:54.263029    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:48:54.263040    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:48:54.281000    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:54.281013    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:54.316810    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:48:54.316824    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:48:54.331168    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:48:54.331182    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:48:54.345810    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:48:54.345822    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:48:54.357489    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:48:54.357501    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:48:54.376873    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:48:54.376887    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:48:54.390764    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:54.390775    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:54.414902    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:48:54.414912    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:54.426760    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:54.426773    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:54.461890    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:54.461899    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:56.967767    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:01.970006    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:01.970278    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:01.998727    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:01.998872    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:02.018671    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:02.018772    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:02.033269    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:02.033357    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:02.045142    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:02.045227    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:02.055879    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:02.055951    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:02.066384    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:02.066456    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:02.080112    4746 logs.go:276] 0 containers: []
	W0917 10:49:02.080125    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:02.080195    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:02.090452    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:02.090475    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:02.090480    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:02.102365    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:02.102376    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:02.116512    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:02.116521    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:02.129252    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:02.129262    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:02.140917    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:02.140928    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:02.161205    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:02.161218    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:02.176472    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:02.176484    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:02.190460    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:02.190473    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:02.215730    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:02.215750    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:02.227632    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:02.227647    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:02.263149    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:02.263158    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:02.267477    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:02.267485    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:02.306884    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:02.306895    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:04.823706    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:09.825927    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:09.826106    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:09.838775    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:09.838873    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:09.850028    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:09.850115    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:09.860325    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:09.860410    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:09.870265    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:09.870349    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:09.880515    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:09.880594    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:09.891185    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:09.891283    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:09.901479    4746 logs.go:276] 0 containers: []
	W0917 10:49:09.901491    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:09.901557    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:09.912258    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:09.912275    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:09.912280    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:09.923650    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:09.923660    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:09.939980    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:09.939989    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:09.956894    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:09.956910    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:09.968266    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:09.968277    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:09.992822    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:09.992832    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:09.997798    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:09.997806    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:10.032409    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:10.032420    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:10.044466    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:10.044477    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:10.056190    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:10.056206    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:10.068266    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:10.068277    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:10.101842    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:10.101855    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:10.122530    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:10.122540    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
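The api_server.go lines above show the retry pattern this whole stretch of log follows: a GET against https://10.0.2.15:8443/healthz that dies on the 5-second client timeout, a full log-gathering pass, then the next attempt. A minimal Go sketch of that polling loop, assuming hypothetical names (pollHealthz) and only the endpoint and timings visible in the log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz reconstructs the check in the api_server.go lines above:
// GET <url> with a short client timeout, repeated until an overall
// deadline passes. The 5s timeout matches the "Client.Timeout exceeded"
// errors in the log; everything else is illustrative.
func pollHealthz(url string, overall, pause time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The in-VM apiserver presents a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered: apiserver is up
			}
		}
		fmt.Printf("stopped: %s: %v\n", url, err)
		time.Sleep(pause) // the real loop gathers logs here instead of sleeping
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	err := pollHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute, 3*time.Second)
	if err != nil {
		fmt.Println(err)
	}
}
```

In the actual run, each failed attempt triggers the container enumeration and log gathering seen below rather than a plain sleep, which is why the checks land roughly eight seconds apart.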
	I0917 10:49:12.638065    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:17.639562    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:17.639804    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:17.663949    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:17.664049    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:17.677747    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:17.677834    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:17.688904    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:17.688978    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:17.703739    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:17.703825    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:17.714374    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:17.714453    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:17.725618    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:17.725701    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:17.736291    4746 logs.go:276] 0 containers: []
	W0917 10:49:17.736303    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:17.736369    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:17.746672    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:17.746685    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:17.746690    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:17.780759    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:17.780772    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:17.795174    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:17.795184    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:17.806585    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:17.806596    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:17.819256    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:17.819268    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:17.837758    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:17.837769    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:17.849112    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:17.849121    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:17.866513    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:17.866524    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:17.901324    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:17.901332    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:17.906483    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:17.906489    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:17.920741    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:17.920752    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:17.935201    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:17.935212    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:17.946520    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:17.946530    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
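Each gathering pass begins by resolving the control-plane containers one component at a time with the same docker ps filter. A sketch of that discovery step; containerIDs is a hypothetical helper, but the docker arguments and the component list are exactly those in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the per-component
// "docker ps -a --filter=name=k8s_<name> --format={{.ID}}" queries above,
// returning the bare container IDs for one component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component list the gatherer walks in the log.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```

Because the query uses `docker ps -a`, exited containers are counted too, which is why the coredns line later grows from 2 to 4 IDs while every other component stays at one.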
	I0917 10:49:20.473211    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:25.475392    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:25.475642    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:25.498322    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:25.498452    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:25.514573    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:25.514673    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:25.527435    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:25.527525    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:25.539120    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:25.539207    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:25.549632    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:25.549715    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:25.560135    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:25.560221    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:25.569882    4746 logs.go:276] 0 containers: []
	W0917 10:49:25.569896    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:25.569960    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:25.580746    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:25.580765    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:25.580770    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:25.603427    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:25.603437    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:25.619408    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:25.619419    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:25.633807    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:25.633818    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:25.645570    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:25.645582    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:25.671136    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:25.671151    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:25.682718    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:25.682727    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:25.718065    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:25.718076    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:25.722793    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:25.722803    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:25.762919    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:25.762930    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:25.777559    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:25.777574    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:25.788854    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:25.788864    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:25.800412    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:25.800425    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
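With the IDs resolved, the pass tails the last 400 lines of each container, pulls the kubelet and docker/cri-docker units via journalctl, filters dmesg to warnings and above, and ends with a "container status" step whose shell one-liner falls back from crictl to plain docker. A sketch of the two container-level pieces, with hypothetical helper names; the container ID is the kube-apiserver ID from this log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the repeated "docker logs --tail 400 <id>" calls.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail",
		fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

// containerStatus mirrors the "container status" step: use crictl when it
// is on PATH, otherwise the `|| sudo docker ps -a` fallback runs, exactly
// as the shell one-liner in the log behaves.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	if status, err := containerStatus(); err == nil {
		fmt.Print(status)
	}
	if logs, err := tailContainerLogs("f177a5fd6d0a", 400); err == nil {
		fmt.Print(logs) // last 400 lines of the kube-apiserver container
	}
}
```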
	I0917 10:49:28.320572    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:33.322647    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:33.322785    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:33.337319    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:33.337410    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:33.351875    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:33.351953    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:33.363487    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:33.363576    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:33.373967    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:33.374038    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:33.387801    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:33.387887    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:33.397981    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:33.398052    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:33.407927    4746 logs.go:276] 0 containers: []
	W0917 10:49:33.407940    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:33.408014    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:33.418401    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:33.418415    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:33.418421    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:33.453407    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:33.453419    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:33.467633    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:33.467643    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:33.481771    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:33.481781    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:33.493077    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:33.493087    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:33.504433    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:33.504443    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:33.518944    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:33.518954    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:33.531570    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:33.531581    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:33.542983    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:33.542994    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:33.547669    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:33.547676    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:33.581852    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:33.581862    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:33.599271    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:33.599280    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:33.622477    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:33.622484    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:36.135637    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:41.137764    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:41.137911    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:41.156124    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:41.156223    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:41.167484    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:41.167588    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:41.178025    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:49:41.178109    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:41.188458    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:41.188534    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:41.199227    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:41.199312    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:41.210998    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:41.211080    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:41.221611    4746 logs.go:276] 0 containers: []
	W0917 10:49:41.221629    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:41.221693    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:41.232543    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:41.232561    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:41.232566    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:41.268260    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:41.268271    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:41.282352    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:49:41.282363    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:49:41.293448    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:41.293461    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:41.307336    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:41.307351    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:41.332029    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:49:41.332040    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:49:41.343572    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:41.343584    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:41.355092    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:41.355102    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:41.367046    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:41.367061    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:41.379231    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:41.379242    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:41.414430    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:41.414437    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:41.428262    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:41.428271    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:41.432901    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:41.432909    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:41.447650    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:41.447664    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:41.459919    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:41.459928    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:43.979244    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:48.980925    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:48.981201    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:49.004992    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:49.005129    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:49.021585    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:49.021684    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:49.034899    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:49:49.034986    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:49.045743    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:49.045823    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:49.063262    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:49.063342    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:49.073815    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:49.073898    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:49.083527    4746 logs.go:276] 0 containers: []
	W0917 10:49:49.083541    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:49.083611    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:49.093853    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:49.093872    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:49.093880    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:49.109026    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:49:49.109040    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:49:49.120700    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:49.120714    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:49.132533    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:49.132543    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:49.144075    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:49.144085    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:49.158085    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:49.158100    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:49.169825    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:49.169834    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:49.174498    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:49.174504    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:49.211769    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:49.211781    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:49.226557    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:49.226567    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:49.247338    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:49.247348    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:49.258732    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:49:49.258744    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:49:49.276379    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:49.276388    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:49.288238    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:49.288251    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:49.324058    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:49.324070    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:51.849918    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:56.852007    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:56.852106    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:56.863389    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:56.863473    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:56.874236    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:56.874319    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:56.886100    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:49:56.886187    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:56.898406    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:56.898486    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:56.909096    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:56.909180    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:56.920378    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:56.920460    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:56.930933    4746 logs.go:276] 0 containers: []
	W0917 10:49:56.930943    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:56.931005    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:56.941306    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:56.941325    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:56.941330    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:56.959272    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:49:56.959287    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:49:56.971641    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:56.971671    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:56.983030    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:56.983042    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:57.001046    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:57.001058    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:57.012838    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:57.012851    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:57.032181    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:57.032192    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:57.046529    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:57.046543    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:57.081562    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:57.081571    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:57.086340    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:49:57.086346    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:49:57.101773    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:57.101785    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:57.114175    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:57.114186    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:57.148715    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:57.148726    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:57.160409    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:57.160422    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:57.184303    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:57.184309    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:59.698257    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:04.699239    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:04.699455    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:04.720624    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:04.720751    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:04.736375    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:04.736466    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:04.749098    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:04.749191    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:04.760954    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:04.761029    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:04.776203    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:04.776278    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:04.786707    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:04.786776    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:04.796823    4746 logs.go:276] 0 containers: []
	W0917 10:50:04.796836    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:04.796908    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:04.808684    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:04.808703    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:04.808708    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:04.820498    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:04.820510    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:04.854360    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:04.854367    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:04.897071    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:04.897081    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:04.912178    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:04.912189    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:04.929803    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:04.929816    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:04.941464    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:04.941477    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:04.966693    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:04.966701    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:04.979650    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:04.979664    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:04.991137    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:04.991150    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:05.017085    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:05.017098    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:05.031296    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:05.031310    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:05.042813    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:05.042827    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:05.054259    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:05.054272    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:05.066170    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:05.066181    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:07.572580    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:12.572970    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:12.573201    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:12.591030    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:12.591136    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:12.603528    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:12.603620    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:12.623249    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:12.623337    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:12.633712    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:12.633789    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:12.644029    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:12.644114    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:12.654713    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:12.654786    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:12.664976    4746 logs.go:276] 0 containers: []
	W0917 10:50:12.664988    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:12.665058    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:12.675225    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:12.675242    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:12.675247    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:12.689248    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:12.689263    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:12.708437    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:12.708451    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:12.725724    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:12.725736    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:12.740916    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:12.740927    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:12.757914    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:12.757925    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:12.770688    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:12.770699    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:12.781830    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:12.781840    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:12.827469    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:12.827483    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:12.839500    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:12.839510    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:12.851930    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:12.851945    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:12.869522    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:12.869540    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:12.904807    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:12.904816    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:12.909292    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:12.909299    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:12.921155    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:12.921168    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:15.445182    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:20.445524    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:20.445703    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:20.461101    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:20.461198    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:20.473666    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:20.473750    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:20.487330    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:20.487418    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:20.497613    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:20.497685    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:20.508385    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:20.508465    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:20.523459    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:20.523534    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:20.534195    4746 logs.go:276] 0 containers: []
	W0917 10:50:20.534206    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:20.534274    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:20.544545    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:20.544564    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:20.544571    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:20.558389    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:20.558402    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:20.579350    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:20.579361    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:20.590700    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:20.590713    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:20.608389    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:20.608398    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:20.620015    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:20.620030    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:20.645526    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:20.645533    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:20.657262    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:20.657272    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:20.691734    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:20.691749    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:20.703686    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:20.703697    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:20.716359    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:20.716369    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:20.728145    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:20.728154    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:20.745692    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:20.745703    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:20.781554    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:20.781569    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:20.786126    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:20.786133    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:23.303250    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:28.305380    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:28.305575    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:28.320695    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:28.320792    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:28.336527    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:28.336613    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:28.347185    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:28.347266    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:28.361328    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:28.361409    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:28.372030    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:28.372116    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:28.382982    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:28.383060    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:28.393478    4746 logs.go:276] 0 containers: []
	W0917 10:50:28.393492    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:28.393560    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:28.403991    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:28.404009    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:28.404017    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:28.418328    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:28.418338    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:28.430405    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:28.430422    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:28.441805    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:28.441816    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:28.456076    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:28.456087    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:28.471165    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:28.471181    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:28.475431    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:28.475440    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:28.489646    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:28.489657    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:28.514884    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:28.514892    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:28.548894    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:28.548902    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:28.592969    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:28.592978    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:28.610965    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:28.610975    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:28.623555    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:28.623566    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:28.635376    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:28.635386    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:28.647305    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:28.647314    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:31.161353    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:36.163439    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:36.163537    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:36.174659    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:36.174750    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:36.184866    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:36.184940    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:36.195169    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:36.195256    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:36.206390    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:36.206469    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:36.216999    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:36.217083    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:36.227804    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:36.227889    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:36.238203    4746 logs.go:276] 0 containers: []
	W0917 10:50:36.238212    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:36.238280    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:36.250202    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:36.250220    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:36.250225    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:36.285988    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:36.285999    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:36.301937    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:36.301952    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:36.314243    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:36.314254    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:36.332365    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:36.332377    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:36.347669    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:36.347684    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:36.359896    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:36.359910    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:36.395252    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:36.395260    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:36.399814    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:36.399820    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:36.413991    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:36.414006    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:36.426097    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:36.426107    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:36.438265    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:36.438274    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:36.450117    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:36.450129    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:36.462269    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:36.462284    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:36.477126    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:36.477136    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:39.003818    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:44.006327    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:44.006574    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:44.025103    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:44.025214    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:44.039794    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:44.039889    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:44.052620    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:44.052709    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:44.063617    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:44.063697    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:44.074711    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:44.074794    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:44.084892    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:44.084971    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:44.095205    4746 logs.go:276] 0 containers: []
	W0917 10:50:44.095215    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:44.095280    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:44.107441    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:44.107458    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:44.107464    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:44.122217    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:44.122234    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:44.135699    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:44.135711    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:44.149044    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:44.149055    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:44.164293    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:44.164308    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:44.176127    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:44.176138    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:44.187982    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:44.187992    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:44.213866    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:44.213874    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:44.248580    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:44.248591    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:44.254075    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:44.254088    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:44.288832    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:44.288843    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:44.302809    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:44.302821    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:44.317137    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:44.317147    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:44.333222    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:44.333235    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:44.350507    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:44.350517    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:46.864345    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:51.866816    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:51.867036    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:51.884706    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:51.884808    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:51.897562    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:51.897650    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:51.908942    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:51.909024    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:51.920437    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:51.920522    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:51.930695    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:51.930769    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:51.941813    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:51.941892    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:51.952121    4746 logs.go:276] 0 containers: []
	W0917 10:50:51.952134    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:51.952202    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:51.962737    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:51.962756    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:51.962761    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:51.977819    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:51.977831    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:51.989759    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:51.989769    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:52.014648    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:52.014655    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:52.026241    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:52.026252    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:52.030746    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:52.030755    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:52.044307    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:52.044318    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:52.059657    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:52.059668    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:52.071164    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:52.071174    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:52.105522    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:52.105540    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:52.118237    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:52.118247    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:52.135666    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:52.135677    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:52.170172    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:52.170190    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:52.189518    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:52.189529    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:52.203934    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:52.203944    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:54.727372    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:59.729469    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:59.729578    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:59.741139    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:59.741230    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:59.751824    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:59.751907    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:59.764120    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:59.764193    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:59.775108    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:59.775194    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:59.785955    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:59.786039    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:59.798331    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:59.798412    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:59.809475    4746 logs.go:276] 0 containers: []
	W0917 10:50:59.809488    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:59.809561    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:59.821266    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:59.821284    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:59.821290    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:59.858396    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:59.858409    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:59.870067    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:59.870081    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:59.888515    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:59.888526    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:59.893183    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:59.893191    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:59.930666    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:59.930678    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:59.946763    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:59.946773    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:59.958347    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:59.958363    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:59.970193    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:59.970203    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:59.995044    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:59.995051    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:00.009829    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:00.009843    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:00.023489    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:00.023503    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:00.038386    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:00.038396    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:00.050153    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:00.050163    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:00.062308    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:00.062318    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:02.576499    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:07.578603    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:07.578790    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:07.593555    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:07.593641    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:07.606361    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:07.606446    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:07.618156    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:51:07.618245    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:07.628393    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:07.628475    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:07.638366    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:07.638447    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:07.648988    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:07.649073    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:07.659270    4746 logs.go:276] 0 containers: []
	W0917 10:51:07.659283    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:07.659353    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:07.670339    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:07.670356    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:07.670362    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:07.704744    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:07.704759    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:07.709444    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:07.709453    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:07.723978    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:07.723994    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:07.735660    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:07.735672    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:07.748543    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:07.748559    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:07.766581    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:07.766595    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:07.784080    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:07.784090    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:07.798691    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:07.798702    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:07.811022    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:07.811037    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:07.834520    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:07.834529    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:07.869586    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:51:07.869600    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:51:07.881343    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:07.881354    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:07.893282    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:07.893294    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:07.905122    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:07.905132    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:10.419206    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:15.421337    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:15.421461    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:15.432418    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:15.432488    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:15.442605    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:15.442690    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:15.453007    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:51:15.453088    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:15.463653    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:15.463731    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:15.474375    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:15.474460    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:15.484807    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:15.484887    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:15.495449    4746 logs.go:276] 0 containers: []
	W0917 10:51:15.495459    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:15.495527    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:15.505761    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:15.505779    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:15.505786    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:15.529927    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:15.529938    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:15.541159    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:15.541169    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:15.555536    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:15.555551    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:15.567461    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:15.567473    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:15.580886    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:51:15.580898    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:51:15.592801    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:15.592813    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:15.607870    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:15.607881    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:15.619664    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:15.619679    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:15.631580    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:15.631595    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:15.642987    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:15.643002    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:15.660610    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:15.660620    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:15.672595    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:15.672609    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:15.706651    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:15.706665    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:15.711724    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:15.711740    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:18.248610    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:23.250610    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:23.250809    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:23.262050    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:23.262128    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:23.272819    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:23.272898    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:23.283752    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:51:23.283835    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:23.294541    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:23.294615    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:23.310859    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:23.310934    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:23.321239    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:23.321318    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:23.337623    4746 logs.go:276] 0 containers: []
	W0917 10:51:23.337638    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:23.337712    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:23.348954    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:23.348970    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:23.348975    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:23.363635    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:23.363650    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:23.375683    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:51:23.375694    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:51:23.388229    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:23.388241    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:23.402831    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:23.402841    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:23.414669    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:23.414679    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:23.448165    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:23.448174    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:23.453070    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:23.453079    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:23.486735    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:23.486749    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:23.498487    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:23.498509    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:23.513663    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:23.513675    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:23.526210    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:23.526224    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:23.543253    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:23.543266    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:23.554797    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:23.554809    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:23.566792    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:23.566802    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:26.091322    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:31.093434    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:31.093529    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:31.104602    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:31.104694    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:31.115091    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:31.115177    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:31.125844    4746 logs.go:276] 4 containers: [72019332a1d8 d3af68a4aad3 f1d1743ca406 684381bbeb3a]
	I0917 10:51:31.125931    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:31.136324    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:31.136410    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:31.149113    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:31.149197    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:31.160973    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:31.161051    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:31.171134    4746 logs.go:276] 0 containers: []
	W0917 10:51:31.171144    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:31.171212    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:31.181607    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:31.181626    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:31.181632    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:31.186087    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:31.186096    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:31.200437    4746 logs.go:123] Gathering logs for coredns [d3af68a4aad3] ...
	I0917 10:51:31.200445    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3af68a4aad3"
	I0917 10:51:31.212772    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:31.212789    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:31.226382    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:31.226394    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:31.237932    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:31.237941    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:31.272639    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:31.272653    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:31.316165    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:31.316178    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:31.328180    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:31.328194    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:31.342053    4746 logs.go:123] Gathering logs for coredns [72019332a1d8] ...
	I0917 10:51:31.342067    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72019332a1d8"
	I0917 10:51:31.353010    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:31.353026    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:31.367008    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:31.367018    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:31.390153    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:31.390162    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:31.401780    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:31.401792    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:31.420631    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:31.420642    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:33.933532    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:38.935895    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:38.940776    4746 out.go:201] 
	W0917 10:51:38.944974    4746 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0917 10:51:38.944991    4746 out.go:270] * 
	W0917 10:51:38.946280    4746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:51:38.956887    4746 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-161000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
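The start failure above is minikube timing out: it polled https://10.0.2.15:8443/healthz (api_server.go:253) for the full 6m0s node-wait and the apiserver never reported healthy. A minimal triage sketch, assuming the running-upgrade-161000 profile still exists on the agent and that curl is available in the guest image (both assumptions, not shown in this log):

	# repeat the exact probe minikube performs, from inside the guest VM
	out/minikube-darwin-arm64 -p running-upgrade-161000 ssh -- curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# collect the full log bundle the advice box above asks for
	out/minikube-darwin-arm64 -p running-upgrade-161000 logs --file=logs.txt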
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-17 10:51:39.07652 -0700 PDT m=+3378.118479918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-161000 -n running-upgrade-161000
E0917 10:51:45.333118    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-161000 -n running-upgrade-161000: exit status 2 (15.573023083s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
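Note that --format takes a Go template over minikube's status struct, so .Host alone ("Running") does not capture why the exit status was 2. A small follow-up sketch, assuming the standard status fields (Host, Kubelet, APIServer, Kubeconfig) exposed by this minikube version:

	# show the per-component fields behind the non-zero exit status
	out/minikube-darwin-arm64 status -p running-upgrade-161000 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'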
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-161000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-388000          | force-systemd-flag-388000 | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-460000              | force-systemd-env-460000  | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-460000           | force-systemd-env-460000  | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT | 17 Sep 24 10:41 PDT |
	| start   | -p docker-flags-981000                | docker-flags-981000       | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-388000             | force-systemd-flag-388000 | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-388000          | force-systemd-flag-388000 | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT | 17 Sep 24 10:41 PDT |
	| start   | -p cert-expiration-767000             | cert-expiration-767000    | jenkins | v1.34.0 | 17 Sep 24 10:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-981000 ssh               | docker-flags-981000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-981000 ssh               | docker-flags-981000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-981000                | docker-flags-981000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT | 17 Sep 24 10:42 PDT |
	| start   | -p cert-options-437000                | cert-options-437000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-437000 ssh               | cert-options-437000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-437000 -- sudo        | cert-options-437000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-437000                | cert-options-437000       | jenkins | v1.34.0 | 17 Sep 24 10:42 PDT | 17 Sep 24 10:42 PDT |
	| start   | -p running-upgrade-161000             | minikube                  | jenkins | v1.26.0 | 17 Sep 24 10:42 PDT | 17 Sep 24 10:43 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-161000             | running-upgrade-161000    | jenkins | v1.34.0 | 17 Sep 24 10:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-767000             | cert-expiration-767000    | jenkins | v1.34.0 | 17 Sep 24 10:45 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-767000             | cert-expiration-767000    | jenkins | v1.34.0 | 17 Sep 24 10:45 PDT | 17 Sep 24 10:45 PDT |
	| start   | -p kubernetes-upgrade-875000          | kubernetes-upgrade-875000 | jenkins | v1.34.0 | 17 Sep 24 10:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-875000          | kubernetes-upgrade-875000 | jenkins | v1.34.0 | 17 Sep 24 10:45 PDT | 17 Sep 24 10:45 PDT |
	| start   | -p kubernetes-upgrade-875000          | kubernetes-upgrade-875000 | jenkins | v1.34.0 | 17 Sep 24 10:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-875000          | kubernetes-upgrade-875000 | jenkins | v1.34.0 | 17 Sep 24 10:45 PDT | 17 Sep 24 10:45 PDT |
	| start   | -p stopped-upgrade-293000             | minikube                  | jenkins | v1.26.0 | 17 Sep 24 10:45 PDT | 17 Sep 24 10:46 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-293000 stop           | minikube                  | jenkins | v1.26.0 | 17 Sep 24 10:46 PDT | 17 Sep 24 10:46 PDT |
	| start   | -p stopped-upgrade-293000             | stopped-upgrade-293000    | jenkins | v1.34.0 | 17 Sep 24 10:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:46:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:46:26.071112    4887 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:46:26.071275    4887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:46:26.071282    4887 out.go:358] Setting ErrFile to fd 2...
	I0917 10:46:26.071285    4887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:46:26.071436    4887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:46:26.072723    4887 out.go:352] Setting JSON to false
	I0917 10:46:26.091184    4887 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4549,"bootTime":1726590637,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:46:26.091250    4887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:46:26.095204    4887 out.go:177] * [stopped-upgrade-293000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:46:26.103127    4887 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:46:26.103163    4887 notify.go:220] Checking for updates...
	I0917 10:46:26.110107    4887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:46:26.113132    4887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:46:26.116162    4887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:46:26.119103    4887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:46:26.122175    4887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:46:26.125304    4887 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:46:26.128082    4887 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 10:46:26.131158    4887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:46:26.134082    4887 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:46:26.141117    4887 start.go:297] selected driver: qemu2
	I0917 10:46:26.141127    4887 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:46:26.141198    4887 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:46:26.143982    4887 cni.go:84] Creating CNI manager for ""
	I0917 10:46:26.144014    4887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:46:26.144041    4887 start.go:340] cluster config:
	{Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
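(The cluster config above is persisted as JSON under the profile directory; see the "Saving config" line below. A minimal sketch for inspecting it by hand on the host, using the path exactly as logged:)

    python3 -m json.tool /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/config.json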
	I0917 10:46:26.144092    4887 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:46:26.152087    4887 out.go:177] * Starting "stopped-upgrade-293000" primary control-plane node in "stopped-upgrade-293000" cluster
	I0917 10:46:26.156164    4887 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 10:46:26.156180    4887 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0917 10:46:26.156187    4887 cache.go:56] Caching tarball of preloaded images
	I0917 10:46:26.156259    4887 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:46:26.156265    4887 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0917 10:46:26.156320    4887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/config.json ...
	I0917 10:46:26.156790    4887 start.go:360] acquireMachinesLock for stopped-upgrade-293000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:46:26.156825    4887 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "stopped-upgrade-293000"
	I0917 10:46:26.156833    4887 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:46:26.156840    4887 fix.go:54] fixHost starting: 
	I0917 10:46:26.156951    4887 fix.go:112] recreateIfNeeded on stopped-upgrade-293000: state=Stopped err=<nil>
	W0917 10:46:26.156959    4887 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:46:26.165146    4887 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-293000" ...
	I0917 10:46:26.868937    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:26.869154    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:26.885283    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:26.885382    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:26.904578    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:26.904663    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:26.914534    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:26.914617    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:26.925392    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:26.925465    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:26.935829    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:26.935908    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:26.952592    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:26.952668    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:26.962656    4746 logs.go:276] 0 containers: []
	W0917 10:46:26.962669    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:26.962737    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:26.980535    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
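(Each enumeration round above looks containers up by minikube's k8s_ name-prefix convention for Docker-managed pods. The same query can be run by hand inside the guest; quotes added for an interactive shell, flags as logged:)

    docker ps -a --filter 'name=k8s_kube-apiserver' --format '{{.ID}}'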
	I0917 10:46:26.980552    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:26.980557    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:27.014607    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:27.014619    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:27.026171    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:27.026183    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:27.038376    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:27.038391    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:27.050423    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:27.050440    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:27.061716    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:27.061729    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:27.084054    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:27.084070    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:27.126399    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:27.126410    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:27.139549    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:27.139559    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:27.161213    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:27.161224    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:27.172536    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:27.172547    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:27.183295    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:27.183311    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:27.194923    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:27.194935    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:27.213094    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:27.213104    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:27.217661    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:27.217670    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:27.231157    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:27.231169    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:27.243165    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:27.243174    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:29.758987    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
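(The probe that keeps timing out above can be reproduced from inside the guest. A minimal sketch; -k is needed because the apiserver's certificate is not trusted outside the cluster, and address/port are taken from the logged URL:)

    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # a healthy apiserver answers "ok"; in this run the request deadline is exceeded instead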
	I0917 10:46:26.169069    4887 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:46:26.169145    4887 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50461-:22,hostfwd=tcp::50462-:2376,hostname=stopped-upgrade-293000 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/disk.qcow2
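(For readability, the same invocation re-grouped as a commented shell sketch; every flag and path is taken from the logged command above, with the machine directory factored into a variable:)

    # machine directory for this profile, as logged
    M=/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000
    # virt machine, host CPU, hvf (Hypervisor.framework) acceleration, 2200 MB / 2 vCPUs, headless;
    # EDK2 UEFI firmware as read-only pflash; boots the boot2docker ISO; QMP socket and pidfile in $M;
    # user-mode NIC forwards host 50461->22 (ssh) and 50462->2376 (docker); the qcow2 disk is the last argument
    qemu-system-aarch64 -M virt,highmem=off -cpu host \
      -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -accel hvf -m 2200 -smp 2 \
      -boot d -cdrom "$M/boot2docker.iso" \
      -qmp unix:"$M/monitor",server,nowait -pidfile "$M/qemu.pid" \
      -nic user,model=virtio,hostfwd=tcp::50461-:22,hostfwd=tcp::50462-:2376,hostname=stopped-upgrade-293000 \
      -daemonize "$M/disk.qcow2"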
	I0917 10:46:26.215430    4887 main.go:141] libmachine: STDOUT: 
	I0917 10:46:26.215458    4887 main.go:141] libmachine: STDERR: 
	I0917 10:46:26.215465    4887 main.go:141] libmachine: Waiting for VM to start (ssh -p 50461 docker@127.0.0.1)...
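(The wait loop polls the forwarded ssh port shown above. The equivalent manual check, assuming the machine key path that appears later in this log:)

    ssh -p 50461 -o StrictHostKeyChecking=no \
      -i /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa \
      docker@127.0.0.1 true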
	I0917 10:46:34.761605    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:34.761723    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:34.772785    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:34.772869    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:34.784189    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:34.784278    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:34.794679    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:34.794756    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:34.805700    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:34.805782    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:34.816069    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:34.816143    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:34.826853    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:34.826926    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:34.837803    4746 logs.go:276] 0 containers: []
	W0917 10:46:34.837815    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:34.837885    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:34.848459    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:34.848484    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:34.848489    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:34.861623    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:34.861639    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:34.872926    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:34.872937    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:34.884646    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:34.884657    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:34.896551    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:34.896561    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:34.908936    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:34.908946    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:34.952765    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:34.952775    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:34.957600    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:34.957610    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:34.995850    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:34.995860    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:35.010507    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:35.010519    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:35.024367    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:35.024378    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:35.041410    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:35.041423    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:35.053695    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:35.053704    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:35.070805    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:35.070821    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:35.082471    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:35.082480    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:35.094559    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:35.094568    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:35.105760    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:35.105772    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:37.630544    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:42.632653    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:42.632900    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:42.650677    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:42.650791    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:42.664856    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:42.664947    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:42.676570    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:42.676651    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:42.687402    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:42.687489    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:42.698959    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:42.699051    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:42.712323    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:42.712403    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:42.725903    4746 logs.go:276] 0 containers: []
	W0917 10:46:42.725915    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:42.725986    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:42.739698    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:42.739715    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:42.739720    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:42.753112    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:42.753122    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:42.797876    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:42.797886    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:42.802819    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:42.802829    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:42.813805    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:42.813818    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:42.838552    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:42.838565    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:42.872209    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:42.872225    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:42.886351    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:42.886359    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:42.901666    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:42.901682    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:42.913146    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:42.913157    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:42.924647    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:42.924661    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:42.942023    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:42.942033    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:42.955082    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:42.955096    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:42.967535    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:42.967545    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:42.984791    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:42.984801    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:42.996098    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:42.996112    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:43.007583    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:43.007594    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:46.310628    4887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/config.json ...
	I0917 10:46:46.311095    4887 machine.go:93] provisionDockerMachine start ...
	I0917 10:46:46.311188    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.311467    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.311477    4887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:46:46.382352    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:46:46.382373    4887 buildroot.go:166] provisioning hostname "stopped-upgrade-293000"
	I0917 10:46:46.382449    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.382659    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.382671    4887 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-293000 && echo "stopped-upgrade-293000" | sudo tee /etc/hostname
	I0917 10:46:46.455302    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-293000
	
	I0917 10:46:46.455377    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.455516    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.455526    4887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-293000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-293000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-293000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:46:46.523867    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:46:46.523882    4887 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1312/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1312/.minikube}
	I0917 10:46:46.523890    4887 buildroot.go:174] setting up certificates
	I0917 10:46:46.523901    4887 provision.go:84] configureAuth start
	I0917 10:46:46.523909    4887 provision.go:143] copyHostCerts
	I0917 10:46:46.523979    4887 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem, removing ...
	I0917 10:46:46.523988    4887 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem
	I0917 10:46:46.524100    4887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem (1679 bytes)
	I0917 10:46:46.524312    4887 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem, removing ...
	I0917 10:46:46.524317    4887 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem
	I0917 10:46:46.524380    4887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem (1078 bytes)
	I0917 10:46:46.524496    4887 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem, removing ...
	I0917 10:46:46.524500    4887 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem
	I0917 10:46:46.524553    4887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem (1123 bytes)
	I0917 10:46:46.524660    4887 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-293000 san=[127.0.0.1 localhost minikube stopped-upgrade-293000]
	I0917 10:46:46.630770    4887 provision.go:177] copyRemoteCerts
	I0917 10:46:46.630813    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:46:46.630821    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:46:46.663556    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:46:46.670222    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 10:46:46.676807    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:46:46.684254    4887 provision.go:87] duration metric: took 160.34675ms to configureAuth
	I0917 10:46:46.684263    4887 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:46:46.684381    4887 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:46:46.684421    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.684518    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.684523    4887 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:46:46.742033    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:46:46.742044    4887 buildroot.go:70] root file system type: tmpfs
	I0917 10:46:46.742095    4887 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:46:46.742168    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.742296    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.742330    4887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:46:46.805196    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:46:46.805259    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.805378    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.805387    4887 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:46:47.180787    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:46:47.180802    4887 machine.go:96] duration metric: took 869.72525ms to provisionDockerMachine
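(The install one-liner above only replaces the unit when it differs from what is already on disk; because /lib/systemd/system/docker.service did not exist yet, diff failed and the install branch ran, producing the symlink message. The same idiom, reflowed for readability:)

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }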
	I0917 10:46:47.180810    4887 start.go:293] postStartSetup for "stopped-upgrade-293000" (driver="qemu2")
	I0917 10:46:47.180816    4887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:46:47.180884    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:46:47.180894    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:46:47.214225    4887 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:46:47.215396    4887 info.go:137] Remote host: Buildroot 2021.02.12
	I0917 10:46:47.215404    4887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/addons for local assets ...
	I0917 10:46:47.215483    4887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/files for local assets ...
	I0917 10:46:47.215582    4887 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem -> 18402.pem in /etc/ssl/certs
	I0917 10:46:47.215674    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:46:47.218058    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem --> /etc/ssl/certs/18402.pem (1708 bytes)
	I0917 10:46:47.224723    4887 start.go:296] duration metric: took 43.909291ms for postStartSetup
	I0917 10:46:47.224739    4887 fix.go:56] duration metric: took 21.068553833s for fixHost
	I0917 10:46:47.224774    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:47.224879    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:47.224888    4887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:46:47.282820    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726595207.290094088
	
	I0917 10:46:47.282828    4887 fix.go:216] guest clock: 1726595207.290094088
	I0917 10:46:47.282832    4887 fix.go:229] Guest: 2024-09-17 10:46:47.290094088 -0700 PDT Remote: 2024-09-17 10:46:47.224741 -0700 PDT m=+21.182421001 (delta=65.353088ms)
	I0917 10:46:47.282844    4887 fix.go:200] guest clock delta is within tolerance: 65.353088ms
	I0917 10:46:47.282847    4887 start.go:83] releasing machines lock for "stopped-upgrade-293000", held for 21.126671375s
	I0917 10:46:47.282921    4887 ssh_runner.go:195] Run: cat /version.json
	I0917 10:46:47.282921    4887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:46:47.282929    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:46:47.282943    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	W0917 10:46:47.283512    4887 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50461: connect: connection refused
	I0917 10:46:47.283532    4887 retry.go:31] will retry after 162.297452ms: dial tcp [::1]:50461: connect: connection refused
	W0917 10:46:47.482841    4887 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0917 10:46:47.482920    4887 ssh_runner.go:195] Run: systemctl --version
	I0917 10:46:47.485381    4887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:46:47.487830    4887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:46:47.487864    4887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0917 10:46:47.491756    4887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0917 10:46:47.497203    4887 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:46:47.497212    4887 start.go:495] detecting cgroup driver to use...
	I0917 10:46:47.497290    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:46:47.505632    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 10:46:47.508920    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:46:47.512129    4887 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:46:47.512155    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:46:47.515260    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:46:47.517967    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:46:47.520836    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:46:47.524323    4887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:46:47.527568    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:46:47.530481    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:46:47.533260    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:46:47.536646    4887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:46:47.539698    4887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:46:47.542251    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:47.622376    4887 ssh_runner.go:195] Run: sudo systemctl restart containerd
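(The sed series above pins containerd to the cgroupfs driver, the registry.k8s.io/pause:3.7 sandbox image, and /etc/cni/net.d before the restart. A sketch for spot-checking the result inside the guest; the exact stanza layout of config.toml is an assumption, not shown in this log:)

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expected after the edits (assumed layout):
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.7"
    #   conf_dir = "/etc/cni/net.d"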
	I0917 10:46:47.633487    4887 start.go:495] detecting cgroup driver to use...
	I0917 10:46:47.633557    4887 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:46:47.638776    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:46:47.643410    4887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:46:47.651090    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:46:47.655727    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:46:47.660168    4887 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:46:47.716185    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:46:47.721594    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:46:47.726752    4887 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:46:47.728079    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:46:47.730948    4887 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 10:46:47.735963    4887 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:46:47.811705    4887 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:46:47.873569    4887 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:46:47.873630    4887 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:46:47.878968    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:47.954904    4887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:46:49.085820    4887 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.130934333s)
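(The 130-byte /etc/docker/daemon.json pushed above is not echoed in the log; given the stated goal of forcing the cgroupfs driver, its shape is presumably something like the following, which is an assumption rather than logged output:)

    cat /etc/docker/daemon.json
    # assumed contents (not logged):
    # {"exec-opts": ["native.cgroupdriver=cgroupfs"], "log-driver": "json-file"}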
	I0917 10:46:49.085897    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:46:49.090631    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:46:49.095055    4887 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:46:49.175831    4887 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:46:49.248923    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:49.331153    4887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:46:49.337304    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:46:49.342316    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:49.418184    4887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:46:49.455824    4887 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:46:49.455924    4887 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:46:49.458038    4887 start.go:563] Will wait 60s for crictl version
	I0917 10:46:49.458097    4887 ssh_runner.go:195] Run: which crictl
	I0917 10:46:49.459474    4887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:46:49.474058    4887 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0917 10:46:49.474142    4887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:46:49.489904    4887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:46:45.519290    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:49.510949    4887 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0917 10:46:49.511100    4887 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0917 10:46:49.512393    4887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
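(The hosts-file pipeline above drops any stale host.minikube.internal entry and appends the fresh 10.0.2.2 mapping via a temp file, since a plain sudo redirect cannot write /etc/hosts directly. Reflowed for readability:)

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts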
	I0917 10:46:49.516466    4887 kubeadm.go:883] updating cluster {Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0917 10:46:49.516518    4887 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 10:46:49.516571    4887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:46:49.527672    4887 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 10:46:49.527680    4887 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 10:46:49.527733    4887 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 10:46:49.531103    4887 ssh_runner.go:195] Run: which lz4
	I0917 10:46:49.532303    4887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 10:46:49.533538    4887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 10:46:49.533550    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0917 10:46:50.417118    4887 docker.go:649] duration metric: took 884.879459ms to copy over tarball
	I0917 10:46:50.417182    4887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
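(The extraction above streams the preload through lz4 in-flight: -I hands tar an external decompressor, and the --xattrs flags preserve security capabilities on the unpacked binaries. An equivalent explicit pipeline, for illustration:)

    lz4 -dc /preloaded.tar.lz4 | sudo tar --xattrs --xattrs-include security.capability -C /var -x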
	I0917 10:46:50.521947    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:50.522078    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:50.536423    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:50.536517    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:50.560481    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:50.560577    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:50.572902    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:50.572989    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:50.592451    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:50.592549    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:50.604481    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:50.604571    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:50.616242    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:50.616348    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:50.632336    4746 logs.go:276] 0 containers: []
	W0917 10:46:50.632350    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:50.632435    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:50.646022    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:50.646040    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:50.646046    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:50.661409    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:50.661427    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:50.699688    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:50.699706    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:50.723618    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:50.723637    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:50.741926    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:50.741940    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:50.755968    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:50.755983    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:50.777909    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:50.777925    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:50.792166    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:50.792179    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:50.836173    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:50.836194    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:50.855409    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:50.855423    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:50.868424    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:50.868437    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:50.873320    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:50.873331    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:50.890632    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:50.890644    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:50.904126    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:50.904138    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:50.917304    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:50.917316    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:50.930189    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:50.930200    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:50.954713    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:50.954728    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:53.498093    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:46:51.606313    4887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.189154459s)
	I0917 10:46:51.606326    4887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 10:46:51.621630    4887 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 10:46:51.624860    4887 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0917 10:46:51.629987    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:51.707451    4887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:46:53.144182    4887 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.43675925s)
	I0917 10:46:53.144305    4887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:46:53.159053    4887 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 10:46:53.159063    4887 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 10:46:53.159068    4887 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
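(The "wasn't preloaded" messages here and above are the k8s.gcr.io -> registry.k8s.io registry rename: the old preload tarball ships k8s.gcr.io image names, so every registry.k8s.io name looks missing and is re-loaded from the host-side cache, as the following lines show. Purely as an illustration of the name mismatch, not what minikube does in this run, the images could also be reconciled by retagging:)

    docker tag k8s.gcr.io/kube-proxy:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1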
	I0917 10:46:53.163962    4887 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:53.166448    4887 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.168169    4887 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.168687    4887 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:53.171039    4887 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.171465    4887 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.172147    4887 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.172146    4887 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.173816    4887 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.175133    4887 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 10:46:53.175140    4887 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.176421    4887 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.176437    4887 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.176391    4887 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.178379    4887 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 10:46:53.179089    4887 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.599534    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.610704    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.615260    4887 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0917 10:46:53.615297    4887 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.615364    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.621154    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.629065    4887 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0917 10:46:53.629075    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0917 10:46:53.629086    4887 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.629138    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.635037    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.640604    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.646369    4887 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0917 10:46:53.646386    4887 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.646434    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.646695    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0917 10:46:53.659835    4887 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0917 10:46:53.659854    4887 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.659915    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.669468    4887 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0917 10:46:53.669475    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0917 10:46:53.669492    4887 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.669554    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.676873    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0917 10:46:53.685049    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 10:46:53.685095    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0917 10:46:53.685186    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0917 10:46:53.690816    4887 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0917 10:46:53.690837    4887 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0917 10:46:53.690907    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0917 10:46:53.691460    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0917 10:46:53.691471    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0917 10:46:53.703106    4887 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 10:46:53.703252    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.716130    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 10:46:53.716263    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0917 10:46:53.726090    4887 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0917 10:46:53.726116    4887 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.726193    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.726764    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0917 10:46:53.726782    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0917 10:46:53.745878    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 10:46:53.746006    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0917 10:46:53.761284    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0917 10:46:53.761311    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0917 10:46:53.768695    4887 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0917 10:46:53.768709    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0917 10:46:53.846546    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0917 10:46:53.859844    4887 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0917 10:46:53.859860    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0917 10:46:53.976760    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0917 10:46:54.021641    4887 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 10:46:54.021765    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:54.024499    4887 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0917 10:46:54.024508    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0917 10:46:54.036108    4887 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0917 10:46:54.036136    4887 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:54.036213    4887 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:54.173280    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0917 10:46:54.173298    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 10:46:54.173426    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 10:46:54.174858    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0917 10:46:54.174868    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0917 10:46:54.201251    4887 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 10:46:54.201273    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0917 10:46:54.438904    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 10:46:54.438948    4887 cache_images.go:92] duration metric: took 1.2799125s to LoadCachedImages
	W0917 10:46:54.438994    4887 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
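Note: each of the loads above follows the same three-step cycle: stat the tarball on the guest, scp it over from the host cache when the stat fails, then pipe it into `docker load`. The cycle aborts for kube-proxy because its tarball is missing from the host cache itself, which is what the warning reports. A rough, self-contained sketch of the cycle; the `run` helper is a hypothetical stand-in for minikube's ssh_runner and executes locally here:

package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical stand-in for minikube's ssh_runner; for the sake
// of a self-contained sketch it executes the command locally.
func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// loadCached mirrors the per-image cycle in the log: existence check on
// the guest, copy on a miss, then a piped docker load.
func loadCached(cachePath, destPath string) error {
	if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, destPath)); err != nil {
		// stat exited non-zero: the tarball is not on the guest yet; the
		// scp from the host cache happens here (elided in this sketch).
		fmt.Printf("scp %s --> %s\n", cachePath, destPath)
	}
	return run(fmt.Sprintf("sudo cat %s | docker load", destPath))
}

func main() {
	_ = loadCached(
		"/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7",
	)
}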
	I0917 10:46:54.439002    4887 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0917 10:46:54.439050    4887 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-293000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:46:54.439126    4887 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:46:54.452601    4887 cni.go:84] Creating CNI manager for ""
	I0917 10:46:54.452614    4887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:46:54.452627    4887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:46:54.452635    4887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-293000 NodeName:stopped-upgrade-293000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:46:54.452696    4887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-293000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 10:46:54.452758    4887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0917 10:46:54.455974    4887 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:46:54.456010    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 10:46:54.459026    4887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0917 10:46:54.464843    4887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:46:54.469772    4887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
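Note: three artifacts are pushed from memory above: the 10-kubeadm.conf systemd drop-in (its contents are the kubelet [Unit]/[Service] block printed at 10:46:54.439050), the kubelet.service unit, and the staged kubeadm.yaml.new. A sketch of the drop-in write, assuming direct file access rather than scp; the empty ExecStart= line is what systemd requires to clear the base unit's command before the override takes effect:

package main

import "os"

func main() {
	// The empty ExecStart= clears kubelet.service's own command; systemd
	// requires this before a drop-in can substitute a new one.
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-293000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
`
	err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644)
	if err != nil {
		panic(err)
	}
	// `sudo systemctl daemon-reload` (run just below in the log) then
	// makes systemd re-read the unit plus drop-in.
}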
	I0917 10:46:54.474866    4887 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0917 10:46:54.476112    4887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
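Note: the /etc/hosts rewrite above is an idempotent replace-then-append: any existing control-plane.minikube.internal line is filtered out, the fresh mapping is appended, and the result is staged to a temp file and copied into place with sudo (a plain shell redirect would run unprivileged). The same command, wrapped for illustration:

package main

import "os/exec"

func main() {
	// Verbatim from the log: filter out any stale mapping, append the
	// fresh one, stage to /tmp, then copy into place with sudo.
	cmd := `{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`
	if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
		panic(err)
	}
}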
	I0917 10:46:54.479944    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:54.561695    4887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:46:54.571584    4887 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000 for IP: 10.0.2.15
	I0917 10:46:54.571594    4887 certs.go:194] generating shared ca certs ...
	I0917 10:46:54.571603    4887 certs.go:226] acquiring lock for ca certs: {Name:mk1d9837d65f8f1762ad8daf2cfbb53face1f201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.571764    4887 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key
	I0917 10:46:54.571803    4887 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key
	I0917 10:46:54.571809    4887 certs.go:256] generating profile certs ...
	I0917 10:46:54.571881    4887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.key
	I0917 10:46:54.571900    4887 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236
	I0917 10:46:54.571912    4887 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0917 10:46:54.637794    4887 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236 ...
	I0917 10:46:54.637809    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236: {Name:mk34090c95e504420b3662e3619686681165024e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.638120    4887 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236 ...
	I0917 10:46:54.638125    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236: {Name:mk506bcbcf66d39a99d777a5b316d23fed4c628b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.638265    4887 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt
	I0917 10:46:54.638397    4887 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key
	I0917 10:46:54.638533    4887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/proxy-client.key
	I0917 10:46:54.638668    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840.pem (1338 bytes)
	W0917 10:46:54.638689    4887 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840_empty.pem, impossibly tiny 0 bytes
	I0917 10:46:54.638696    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:46:54.638715    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:46:54.638733    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:46:54.638753    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem (1679 bytes)
	I0917 10:46:54.638791    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem (1708 bytes)
	I0917 10:46:54.639126    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:46:54.646181    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:46:54.652969    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:46:54.660115    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:46:54.666835    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:46:54.673633    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:46:54.680450    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:46:54.687834    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:46:54.695260    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem --> /usr/share/ca-certificates/18402.pem (1708 bytes)
	I0917 10:46:54.702445    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:46:54.709316    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840.pem --> /usr/share/ca-certificates/1840.pem (1338 bytes)
	I0917 10:46:54.716290    4887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:46:54.721329    4887 ssh_runner.go:195] Run: openssl version
	I0917 10:46:54.723293    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18402.pem && ln -fs /usr/share/ca-certificates/18402.pem /etc/ssl/certs/18402.pem"
	I0917 10:46:54.726367    4887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18402.pem
	I0917 10:46:54.727730    4887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:11 /usr/share/ca-certificates/18402.pem
	I0917 10:46:54.727758    4887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18402.pem
	I0917 10:46:54.729616    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18402.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:46:54.732683    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:46:54.736042    4887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:46:54.737592    4887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:46:54.737624    4887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:46:54.739323    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:46:54.742094    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1840.pem && ln -fs /usr/share/ca-certificates/1840.pem /etc/ssl/certs/1840.pem"
	I0917 10:46:54.744969    4887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1840.pem
	I0917 10:46:54.746297    4887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:11 /usr/share/ca-certificates/1840.pem
	I0917 10:46:54.746320    4887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1840.pem
	I0917 10:46:54.747955    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1840.pem /etc/ssl/certs/51391683.0"
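Note: the hash-and-symlink runs above follow OpenSSL's trust-directory convention: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem here), and /etc/ssl/certs/<hash>.0 must point at the certificate for lookup to succeed. A sketch of one iteration, using the same commands as the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Prints the subject-name hash, e.g. b5213941 for minikubeCA.pem.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Mirrors the log: test -L <link> || ln -fs <pem> <link>
	cmd := fmt.Sprintf(`sudo /bin/bash -c "test -L %s || ln -fs %s %s"`, link, pem, link)
	_ = exec.Command("/bin/bash", "-c", cmd).Run()
}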
	I0917 10:46:54.751328    4887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:46:54.752755    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:46:54.754993    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:46:54.756906    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:46:54.759024    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:46:54.760741    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:46:54.762520    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
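Note: the `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit would mark the cert for regeneration. A one-certificate sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	crt := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	// Exit status 0: the cert is still valid 86400s (24h) from now;
	// non-zero would mark it for regeneration.
	err := exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run()
	fmt.Println("expires within 24h:", err != nil)
}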
	I0917 10:46:54.764418    4887 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:46:54.764491    4887 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:46:54.774726    4887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:46:54.777915    4887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:46:54.777927    4887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:46:54.777959    4887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:46:54.780875    4887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:46:54.781155    4887 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-293000" does not appear in /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:46:54.781250    4887 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1312/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-293000" cluster setting kubeconfig missing "stopped-upgrade-293000" context setting]
	I0917 10:46:54.781474    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.781925    4887 kapi.go:59] client config for stopped-upgrade-293000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10421d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:46:54.782261    4887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:46:54.784862    4887 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-293000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
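Note: drift detection above rests on diff's exit status: 0 means the staged kubeadm.yaml.new matches the file on disk, 1 means it does not, and minikube treats the latter as "reconfigure the cluster from the new file". A minimal sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil { // diff exits 1 when the files differ
		fmt.Printf("detected kubeadm config drift (will reconfigure):\n%s", out)
	}
}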
	I0917 10:46:54.784869    4887 kubeadm.go:1160] stopping kube-system containers ...
	I0917 10:46:54.784922    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:46:54.795348    4887 docker.go:483] Stopping containers: [06f0615ccfda 7d102603a586 98b0c48c9735 4dabcabdd1a5 185cd67f41ca 8865fe51a3f3 e9458d99309c b0315bdc1db3]
	I0917 10:46:54.795431    4887 ssh_runner.go:195] Run: docker stop 06f0615ccfda 7d102603a586 98b0c48c9735 4dabcabdd1a5 185cd67f41ca 8865fe51a3f3 e9458d99309c b0315bdc1db3
	I0917 10:46:54.806495    4887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 10:46:54.812127    4887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:46:54.815362    4887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 10:46:54.815368    4887 kubeadm.go:157] found existing configuration files:
	
	I0917 10:46:54.815396    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf
	I0917 10:46:54.817957    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 10:46:54.817984    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:46:54.820780    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf
	I0917 10:46:54.823764    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 10:46:54.823788    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:46:54.826520    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf
	I0917 10:46:54.828948    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 10:46:54.828973    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:46:54.832142    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf
	I0917 10:46:54.835229    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 10:46:54.835261    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
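Note: the four checks above are one sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the grep does not succeed (which, in this run, means the files simply do not exist, hence grep's exit status 2). Sketched:

package main

import "os/exec"

func main() {
	endpoint := "https://control-plane.minikube.internal:50495"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero both when the endpoint is absent and when
		// the file is missing (status 2 in this run); either way the
		// stale file is removed before reconfiguring.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}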
	I0917 10:46:54.837980    4887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:46:54.840806    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:54.863550    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.346578    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.472938    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.496660    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
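Note: because existing configuration files were found, the restart path replays selected `kubeadm init` phases instead of running a full init. The five commands above, verbatim from the log, replayed in order by a small driver:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same five phases, same order, as the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			panic(err)
		}
	}
}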
	I0917 10:46:55.520961    4887 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:46:55.521045    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:46:56.023343    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:46:58.500130    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:46:58.500274    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:46:58.512394    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:46:58.512473    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:46:58.523376    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:46:58.523460    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:46:58.534363    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:46:58.534445    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:46:58.545592    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:46:58.545680    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:46:58.556476    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:46:58.556563    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:46:58.567592    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:46:58.567677    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:46:58.578153    4746 logs.go:276] 0 containers: []
	W0917 10:46:58.578166    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:46:58.578244    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:46:58.589115    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:46:58.589134    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:46:58.589139    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:46:58.628138    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:46:58.628151    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:46:58.640506    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:46:58.640522    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:46:58.653248    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:46:58.653261    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:46:58.665858    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:46:58.665871    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:46:58.680947    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:46:58.680958    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:46:58.693256    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:46:58.693269    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:46:58.704880    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:46:58.704891    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:46:58.717060    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:46:58.717073    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:46:58.726225    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:46:58.726236    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:46:58.740934    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:46:58.740951    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:46:58.752635    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:46:58.752647    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:46:58.773535    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:46:58.773545    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:46:58.797740    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:46:58.797751    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:46:58.841978    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:46:58.841998    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:46:58.854615    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:46:58.854625    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:46:58.868151    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:46:58.868167    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:46:56.523087    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:46:56.528093    4887 api_server.go:72] duration metric: took 1.007164167s to wait for apiserver process to appear ...
	I0917 10:46:56.528105    4887 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:46:56.528115    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:01.387236    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:01.530081    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:01.530108    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:06.389341    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:06.389611    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:06.410459    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:47:06.410574    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:06.426195    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:47:06.426311    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:06.438628    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:47:06.438711    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:06.449198    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:47:06.449277    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:06.459645    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:47:06.459717    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:06.470391    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:47:06.470472    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:06.480872    4746 logs.go:276] 0 containers: []
	W0917 10:47:06.480884    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:06.480954    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:06.491359    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:47:06.491376    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:47:06.491382    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:47:06.502618    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:06.502629    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:06.537127    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:47:06.537138    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:47:06.551192    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:47:06.551203    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:47:06.567466    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:47:06.567477    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:06.584362    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:47:06.584373    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:47:06.597496    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:47:06.597511    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:47:06.608940    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:47:06.608953    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:47:06.620356    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:06.620369    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:06.664400    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:47:06.664418    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:47:06.679962    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:47:06.679973    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:47:06.692452    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:47:06.692462    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:47:06.707563    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:47:06.707575    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:06.719726    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:06.719736    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:06.727829    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:47:06.727839    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:47:06.739979    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:47:06.739991    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:47:06.751306    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:06.751322    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:09.278093    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:06.530465    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:06.530489    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:14.278776    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:14.278906    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:14.291105    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:47:14.291184    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:14.302609    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:47:14.302695    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:14.313959    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:47:14.314046    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:14.325198    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:47:14.325280    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:14.336066    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:47:14.336156    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:14.349712    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:47:14.349798    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:14.364965    4746 logs.go:276] 0 containers: []
	W0917 10:47:14.364978    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:14.365055    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:14.375846    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
	I0917 10:47:14.375862    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:14.375867    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:14.410109    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:47:14.410120    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:47:14.425039    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:47:14.425055    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:47:14.438331    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:47:14.438345    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:47:14.449730    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:47:14.449743    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:47:14.469289    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:47:14.469302    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:47:14.480925    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:14.480936    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:14.485387    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:47:14.485395    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:47:14.503163    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:47:14.503179    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:47:14.516284    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:14.516294    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:14.559348    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:47:14.559368    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:47:14.570315    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:47:14.570328    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:47:14.581587    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:47:14.581597    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:47:14.593281    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:47:14.593293    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:14.605883    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:47:14.605893    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:14.623592    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:14.623601    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:14.644976    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:47:14.644987    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:47:11.530763    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:11.530818    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:17.164929    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:16.531463    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:16.531541    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:22.167058    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:22.167307    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:22.193609    4746 logs.go:276] 2 containers: [7a70838976e2 6926756d5005]
	I0917 10:47:22.193727    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:22.209534    4746 logs.go:276] 2 containers: [216d2144d1a2 780ad08d4d6c]
	I0917 10:47:22.209629    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:22.222021    4746 logs.go:276] 1 containers: [4fc227e49c92]
	I0917 10:47:22.222116    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:22.233703    4746 logs.go:276] 2 containers: [d151f1d9df5b 6423b17eb0f9]
	I0917 10:47:22.233784    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:22.244241    4746 logs.go:276] 1 containers: [401c0b7782d8]
	I0917 10:47:22.244312    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:22.254632    4746 logs.go:276] 2 containers: [52d5aafbabbf 2e047c9d171f]
	I0917 10:47:22.254713    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:22.265770    4746 logs.go:276] 0 containers: []
	W0917 10:47:22.265781    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:22.265845    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:22.276939    4746 logs.go:276] 2 containers: [989478b5a2ee d45af76446cf]
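Before each gathering pass, the containers for every control-plane component are rediscovered with a kubelet-style name filter; the `logs.go:276] N containers:` lines are the parsed result, and an empty list is what produces the "No container was found matching" warning for kindnet. A hedged sketch of that discovery step (`findContainers` is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // findContainers reproduces the docker ps invocation seen above:
    // -a includes stopped containers, the filter matches the kubelet
    // naming scheme (k8s_<component>_...), and only the ID is printed.
    func findContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kindnet"} {
            ids, err := findContainers(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }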
	I0917 10:47:22.276957    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:22.276963    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:22.312935    4746 logs.go:123] Gathering logs for kube-apiserver [6926756d5005] ...
	I0917 10:47:22.312944    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6926756d5005"
	I0917 10:47:22.325045    4746 logs.go:123] Gathering logs for etcd [780ad08d4d6c] ...
	I0917 10:47:22.325055    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780ad08d4d6c"
	I0917 10:47:22.338701    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:22.338711    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:22.361006    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:47:22.361014    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:22.372866    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:22.372880    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:22.414412    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:22.414419    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:22.418766    4746 logs.go:123] Gathering logs for etcd [216d2144d1a2] ...
	I0917 10:47:22.418772    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 216d2144d1a2"
	I0917 10:47:22.473715    4746 logs.go:123] Gathering logs for coredns [4fc227e49c92] ...
	I0917 10:47:22.473730    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc227e49c92"
	I0917 10:47:22.484964    4746 logs.go:123] Gathering logs for kube-scheduler [d151f1d9df5b] ...
	I0917 10:47:22.484978    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d151f1d9df5b"
	I0917 10:47:22.500958    4746 logs.go:123] Gathering logs for kube-scheduler [6423b17eb0f9] ...
	I0917 10:47:22.500969    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423b17eb0f9"
	I0917 10:47:22.512424    4746 logs.go:123] Gathering logs for kube-proxy [401c0b7782d8] ...
	I0917 10:47:22.512436    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 401c0b7782d8"
	I0917 10:47:22.523773    4746 logs.go:123] Gathering logs for kube-controller-manager [52d5aafbabbf] ...
	I0917 10:47:22.523787    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d5aafbabbf"
	I0917 10:47:22.541319    4746 logs.go:123] Gathering logs for storage-provisioner [989478b5a2ee] ...
	I0917 10:47:22.541329    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989478b5a2ee"
	I0917 10:47:22.552954    4746 logs.go:123] Gathering logs for storage-provisioner [d45af76446cf] ...
	I0917 10:47:22.552968    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d45af76446cf"
	I0917 10:47:22.567848    4746 logs.go:123] Gathering logs for kube-apiserver [7a70838976e2] ...
	I0917 10:47:22.567861    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a70838976e2"
	I0917 10:47:22.581360    4746 logs.go:123] Gathering logs for kube-controller-manager [2e047c9d171f] ...
	I0917 10:47:22.581371    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e047c9d171f"
	I0917 10:47:21.532310    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:21.532385    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:25.094963    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:26.533796    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:26.533898    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:30.095930    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:30.095981    4746 kubeadm.go:597] duration metric: took 4m4.224001291s to restartPrimaryControlPlane
	W0917 10:47:30.096016    4746 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 10:47:30.096034    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0917 10:47:31.123448    4746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027434042s)
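After 4m4s of failed healthz polls, minikube abandons the restart path and force-resets the node, using the version-pinned kubeadm binary from /var/lib/minikube/binaries and the cri-dockerd socket. A sketch of that invocation, assembled exactly as it appears in the log (the helper name is mine; running this wipes /etc/kubernetes on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // resetControlPlane mirrors the command logged above: force-reset
    // the node with the pinned kubeadm and the cri-dockerd socket.
    func resetControlPlane(version, criSocket string) error {
        cmd := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm reset --cri-socket %s --force`,
            version, criSocket)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := resetControlPlane("v1.24.1", "/var/run/cri-dockerd.sock"); err != nil {
            fmt.Println("reset failed:", err)
        }
    }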
	I0917 10:47:31.123530    4746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:47:31.128585    4746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:47:31.131457    4746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:47:31.134000    4746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 10:47:31.134006    4746 kubeadm.go:157] found existing configuration files:
	
	I0917 10:47:31.134030    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf
	I0917 10:47:31.137084    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 10:47:31.137111    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:47:31.140249    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf
	I0917 10:47:31.142732    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 10:47:31.142758    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:47:31.145626    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf
	I0917 10:47:31.148500    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 10:47:31.148524    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:47:31.151009    4746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf
	I0917 10:47:31.153762    4746 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 10:47:31.153788    4746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
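Every `ls` and `grep` above exits with status 2 because the preceding `kubeadm reset` already removed the kubeconfigs; a file absent (or not pointing at the expected control-plane endpoint) is treated the same way, and each path is unconditionally removed so `kubeadm init` can write a fresh one. A sketch of that check-then-remove cycle, with illustrative helper names:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleKubeconfig mirrors the grep/rm pairs above: keep a config
    // only if it already references the expected endpoint, otherwise
    // delete it so the upcoming init can regenerate it.
    func cleanStaleKubeconfig(path, endpoint string) {
        if err := exec.Command("grep", endpoint, path).Run(); err != nil {
            // grep exits non-zero when the file is missing or has no
            // match; either way the config is unusable for this cluster.
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
            os.Remove(path)
        }
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            cleanStaleKubeconfig("/etc/kubernetes/"+f,
                "https://control-plane.minikube.internal:50299")
        }
    }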
	I0917 10:47:31.156821    4746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 10:47:31.175422    4746 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 10:47:31.175454    4746 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 10:47:31.222330    4746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 10:47:31.222399    4746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 10:47:31.222456    4746 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 10:47:31.275817    4746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 10:47:31.280003    4746 out.go:235]   - Generating certificates and keys ...
	I0917 10:47:31.280041    4746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 10:47:31.280075    4746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 10:47:31.280116    4746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 10:47:31.280152    4746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 10:47:31.280188    4746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 10:47:31.280218    4746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 10:47:31.280253    4746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 10:47:31.280288    4746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 10:47:31.280331    4746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 10:47:31.280375    4746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 10:47:31.280400    4746 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 10:47:31.280436    4746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 10:47:31.372090    4746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 10:47:31.480128    4746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 10:47:31.608937    4746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 10:47:31.701806    4746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 10:47:31.735700    4746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 10:47:31.736097    4746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 10:47:31.736186    4746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 10:47:31.822442    4746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 10:47:31.825354    4746 out.go:235]   - Booting up control plane ...
	I0917 10:47:31.825404    4746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 10:47:31.825442    4746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 10:47:31.825482    4746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 10:47:31.825553    4746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 10:47:31.826449    4746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 10:47:31.535542    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:31.535563    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:36.835766    4746 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.008977 seconds
	I0917 10:47:36.835866    4746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 10:47:36.842283    4746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 10:47:37.352059    4746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 10:47:37.352154    4746 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-161000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 10:47:37.862038    4746 kubeadm.go:310] [bootstrap-token] Using token: il327p.updajlxgrwyov07z
	I0917 10:47:37.864824    4746 out.go:235]   - Configuring RBAC rules ...
	I0917 10:47:37.864919    4746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 10:47:37.865873    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 10:47:37.873713    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 10:47:37.875254    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 10:47:37.876593    4746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 10:47:37.877860    4746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 10:47:37.882973    4746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 10:47:38.054906    4746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 10:47:38.267520    4746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 10:47:38.268018    4746 kubeadm.go:310] 
	I0917 10:47:38.268047    4746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 10:47:38.268051    4746 kubeadm.go:310] 
	I0917 10:47:38.268087    4746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 10:47:38.268091    4746 kubeadm.go:310] 
	I0917 10:47:38.268106    4746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 10:47:38.268138    4746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 10:47:38.268171    4746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 10:47:38.268174    4746 kubeadm.go:310] 
	I0917 10:47:38.268202    4746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 10:47:38.268209    4746 kubeadm.go:310] 
	I0917 10:47:38.268230    4746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 10:47:38.268232    4746 kubeadm.go:310] 
	I0917 10:47:38.268255    4746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 10:47:38.268302    4746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 10:47:38.268377    4746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 10:47:38.268383    4746 kubeadm.go:310] 
	I0917 10:47:38.268435    4746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 10:47:38.268475    4746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 10:47:38.268480    4746 kubeadm.go:310] 
	I0917 10:47:38.268520    4746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token il327p.updajlxgrwyov07z \
	I0917 10:47:38.268595    4746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d \
	I0917 10:47:38.268609    4746 kubeadm.go:310] 	--control-plane 
	I0917 10:47:38.268613    4746 kubeadm.go:310] 
	I0917 10:47:38.268657    4746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 10:47:38.268665    4746 kubeadm.go:310] 
	I0917 10:47:38.268700    4746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token il327p.updajlxgrwyov07z \
	I0917 10:47:38.268751    4746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d 
	I0917 10:47:38.268832    4746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
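The re-init completes in roughly seven seconds because the reset deliberately left certificates and etcd data in place ("Using existing ... certificate and key on disk" throughout). That is also why the init command carries such a long --ignore-preflight-errors list: those directories and manifests are expected to be populated. A sketch reconstructing the invocation from the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // initArgs assembles the kubeadm init command from the log above;
    // preflight checks for already-populated dirs and manifests are
    // ignored because the prior reset keeps certs and etcd data.
    func initArgs(version, config string) string {
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        return fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
            version, config, strings.Join(ignored, ","))
    }

    func main() {
        fmt.Println(initArgs("v1.24.1", "/var/tmp/minikube/kubeadm.yaml"))
    }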
	I0917 10:47:38.268840    4746 cni.go:84] Creating CNI manager for ""
	I0917 10:47:38.268849    4746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:47:38.270304    4746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 10:47:38.277068    4746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 10:47:38.280778    4746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
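With the "qemu2" driver and docker runtime, minikube writes a 496-byte bridge conflist into /etc/cni/net.d. The log records only the size, not the contents; a representative bridge conflist of the same shape looks like the following (field values here are illustrative, not the exact bytes shipped):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }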
	I0917 10:47:38.286262    4746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 10:47:38.286324    4746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 10:47:38.286332    4746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-161000 minikube.k8s.io/updated_at=2024_09_17T10_47_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=running-upgrade-161000 minikube.k8s.io/primary=true
	I0917 10:47:38.332797    4746 ops.go:34] apiserver oom_adj: -16
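The ops.go:34 line confirms the apiserver's OOM score adjust is -16, i.e. the kernel is biased against killing it under memory pressure. A self-contained sketch of the probe run above (`cat /proc/$(pgrep kube-apiserver)/oom_adj`); the helper name is mine, and it assumes a single apiserver process as on this node:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj finds the kube-apiserver PID and reads its legacy
    // oom_adj value; negative values make the kernel less likely to
    // OOM-kill the process (-16 in the log above).
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        b, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj",
            strings.TrimSpace(string(pid))))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := apiserverOOMAdj()
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", v)
    }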
	I0917 10:47:38.332826    4746 kubeadm.go:1113] duration metric: took 46.557458ms to wait for elevateKubeSystemPrivileges
	I0917 10:47:38.332841    4746 kubeadm.go:394] duration metric: took 4m12.485961333s to StartCluster
	I0917 10:47:38.332851    4746 settings.go:142] acquiring lock: {Name:mk01dda79792b7eaa96d8ee72bfae59b39d5fab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:47:38.332937    4746 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:47:38.333352    4746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:47:38.333567    4746 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:47:38.333621    4746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:47:38.333658    4746 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-161000"
	I0917 10:47:38.333668    4746 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-161000"
	W0917 10:47:38.333671    4746 addons.go:243] addon storage-provisioner should already be in state true
	I0917 10:47:38.333646    4746 config.go:182] Loaded profile config "running-upgrade-161000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:47:38.333686    4746 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-161000"
	I0917 10:47:38.333691    4746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-161000"
	I0917 10:47:38.333684    4746 host.go:66] Checking if "running-upgrade-161000" exists ...
	I0917 10:47:38.334574    4746 kapi.go:59] client config for running-upgrade-161000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/running-upgrade-161000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043f1800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
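The kapi.go:59 entry dumps the client config minikube builds from the profile's client cert/key and cluster CA (the log shows an internal sanitized form of the TLS struct). A minimal client-go equivalent, a sketch using the public rest.TLSClientConfig and the paths from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Mirror the rest.Config dumped above: host plus mutual-TLS
        // material taken from the minikube profile directory.
        profile := "/Users/jenkins/minikube-integration/19662-1312/.minikube"
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/profiles/running-upgrade-161000/client.crt",
                KeyFile:  profile + "/profiles/running-upgrade-161000/client.key",
                CAFile:   profile + "/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }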
	I0917 10:47:38.334699    4746 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-161000"
	W0917 10:47:38.334704    4746 addons.go:243] addon default-storageclass should already be in state true
	I0917 10:47:38.334710    4746 host.go:66] Checking if "running-upgrade-161000" exists ...
	I0917 10:47:38.338022    4746 out.go:177] * Verifying Kubernetes components...
	I0917 10:47:38.338398    4746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 10:47:38.342149    4746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 10:47:38.342156    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:47:38.344933    4746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:47:38.348996    4746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:47:38.353063    4746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:47:38.353070    4746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 10:47:38.353076    4746 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/running-upgrade-161000/id_rsa Username:docker}
	I0917 10:47:38.438559    4746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:47:38.445498    4746 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:47:38.445557    4746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:47:38.449691    4746 api_server.go:72] duration metric: took 116.117167ms to wait for apiserver process to appear ...
	I0917 10:47:38.449699    4746 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:47:38.449706    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
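Every api_server.go:253/269 pair in this log is one poll: a GET against /healthz that fails with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" and is retried. The ~5s spacing of the entries suggests a 5-second per-request timeout; a self-contained sketch of that probe under that assumption (the overall deadline mirrors the "Will wait 6m0s for node" line above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz performs one probe of the apiserver health endpoint.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between log lines
            Transport: &http.Transport{
                // The real code trusts the cluster CA; skipping verification
                // here just keeps the sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "Client.Timeout exceeded while awaiting headers"
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("gave up waiting for healthz")
    }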
	I0917 10:47:38.460265    4746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:47:38.530347    4746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 10:47:38.803518    4746 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:47:38.803533    4746 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:47:36.537075    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:36.537097    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:43.451753    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:43.451853    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:41.539146    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:41.539184    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:48.452487    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:48.452508    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:46.540553    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:46.540588    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:53.452831    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:53.452856    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:51.542740    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:51.542795    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:58.453572    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:58.453608    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:56.543370    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:56.543584    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:56.557859    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:47:56.557958    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:56.572358    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:47:56.572459    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:56.582931    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:47:56.583017    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:56.592672    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:47:56.592760    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:56.603072    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:47:56.603154    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:56.617800    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:47:56.617878    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:56.628439    4887 logs.go:276] 0 containers: []
	W0917 10:47:56.628453    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:56.628519    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:56.640383    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:47:56.640400    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:47:56.640406    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:47:56.653058    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:47:56.653070    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:47:56.664010    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:56.664023    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:56.770516    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:47:56.770528    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:47:56.797534    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:47:56.797546    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:47:56.809743    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:56.809754    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:56.835380    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:47:56.835388    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:56.847015    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:56.847032    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:56.851225    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:47:56.851231    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:47:56.866972    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:47:56.866986    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:47:56.883857    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:47:56.883870    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:47:56.903139    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:47:56.903152    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:47:56.914951    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:56.914963    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:56.954379    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:47:56.954388    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:47:56.969830    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:47:56.969843    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:47:56.981439    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:47:56.981453    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:47:56.993752    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:47:56.993763    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:47:59.513431    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:03.454337    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:03.454404    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:04.515535    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:04.515712    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:04.531729    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:04.531809    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:04.547948    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:04.548022    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:04.558187    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:04.558270    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:04.568991    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:04.569076    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:04.579269    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:04.579355    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:04.590110    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:04.590196    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:04.600547    4887 logs.go:276] 0 containers: []
	W0917 10:48:04.600560    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:04.600636    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:04.611692    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:04.611710    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:04.611716    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:04.649296    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:04.649311    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:04.663593    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:04.663602    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:04.679155    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:04.679168    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:04.691223    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:04.691233    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:04.708273    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:04.708284    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:04.723042    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:04.723053    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:04.737352    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:04.737362    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:04.741934    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:04.741940    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:04.754368    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:04.754378    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:04.766278    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:04.766293    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:04.789880    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:04.789887    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:04.827350    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:04.827356    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:04.841948    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:04.841959    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:04.867114    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:04.867125    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:04.879026    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:04.879036    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:04.896888    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:04.896897    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:08.455426    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:08.455462    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 10:48:08.804914    4746 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 10:48:08.809270    4746 out.go:177] * Enabled addons: storage-provisioner
	I0917 10:48:08.817114    4746 addons.go:510] duration metric: took 30.484433542s for enable addons: enabled=[storage-provisioner]
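The 'default-storageclass' failure above is downstream of the same unreachable apiserver: the addon callback lists StorageClasses over 10.0.2.15:8443 and hits an i/o timeout. In client-go terms the call it needs is roughly the following sketch (kubeconfig path taken from the log; the timeout value is illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the kubeconfig minikube just wrote, then
        // issue the StorageClass list the addon callback depends on.
        // Against a dead apiserver this fails exactly like the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            fmt.Println("Error listing StorageClasses:", err)
            return
        }
        fmt.Println("storage classes:", len(scs.Items))
    }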
	I0917 10:48:07.410657    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:13.456731    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:13.456779    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:12.413116    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:12.413432    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:12.437443    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:12.437563    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:12.453238    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:12.453330    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:12.467255    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:12.467337    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:12.477612    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:12.477694    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:12.488157    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:12.488237    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:12.499176    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:12.499259    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:12.509746    4887 logs.go:276] 0 containers: []
	W0917 10:48:12.509759    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:12.509833    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:12.520522    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:12.520541    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:12.520546    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:12.536988    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:12.537003    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:12.551132    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:12.551141    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:12.562743    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:12.562754    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:12.597777    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:12.597788    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:12.622738    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:12.622754    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:12.635454    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:12.635466    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:12.672921    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:12.672931    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:12.677552    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:12.677558    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:12.694632    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:12.694646    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:12.707137    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:12.707147    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:12.732033    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:12.732045    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:12.745591    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:12.745604    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:12.760241    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:12.760254    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:12.771257    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:12.771273    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:12.788115    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:12.788125    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:12.801281    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:12.801293    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:15.315905    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:18.458477    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:18.458509    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:20.318043    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:20.318165    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:20.330576    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:20.330660    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:20.341008    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:20.341095    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:20.351427    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:20.351510    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:20.361976    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:20.362067    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:20.374323    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:20.374403    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:20.384712    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:20.384805    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:20.394564    4887 logs.go:276] 0 containers: []
	W0917 10:48:20.394577    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:20.394646    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:20.405004    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:20.405023    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:20.405029    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:20.417339    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:20.417349    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:20.431676    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:20.431691    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:20.449096    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:20.449105    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:20.465335    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:20.465350    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:20.477352    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:20.477362    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:20.488730    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:20.488741    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:20.512746    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:20.512756    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:20.524298    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:20.524310    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:20.536483    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:20.536494    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:20.573299    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:20.573306    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:20.613455    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:20.613468    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:20.627484    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:20.627494    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:20.653250    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:20.653263    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:20.667523    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:20.667539    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:20.671808    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:20.671814    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:20.683050    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:20.683060    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:23.460619    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:23.460648    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:23.197226    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:28.462727    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:28.462746    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:28.199348    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:28.199518    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:28.211961    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:28.212054    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:28.223148    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:28.223236    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:28.233927    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:28.234008    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:28.244285    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:28.244374    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:28.262875    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:28.262956    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:28.273247    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:28.273321    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:28.284109    4887 logs.go:276] 0 containers: []
	W0917 10:48:28.284121    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:28.284193    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:28.294592    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:28.294607    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:28.294612    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:28.308940    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:28.308955    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:28.344002    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:28.344013    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:28.359086    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:28.359096    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:28.386910    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:28.386925    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:28.400065    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:28.400082    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:28.438887    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:28.438897    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:28.443184    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:28.443192    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:28.455492    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:28.455503    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:28.474319    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:28.474329    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:28.499393    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:28.499403    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:28.510915    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:28.510927    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:28.522364    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:28.522376    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:28.533691    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:28.533702    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:28.545482    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:28.545496    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:28.556750    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:28.556760    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:28.570936    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:28.570947    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:33.464768    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:33.464798    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:31.086940    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:38.465007    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:38.465138    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:38.475963    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:48:38.476046    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:38.486418    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:48:38.486494    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:38.496992    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:48:38.497083    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:38.507532    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:48:38.507605    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:38.518120    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:48:38.518195    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:38.528580    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:48:38.528664    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:38.538805    4746 logs.go:276] 0 containers: []
	W0917 10:48:38.538817    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:38.538893    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:38.550686    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:48:38.550699    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:38.550704    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:38.555551    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:38.555559    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:38.592167    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:48:38.592182    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:48:38.603861    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:48:38.603871    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:48:38.620193    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:38.620204    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:38.643620    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:48:38.643631    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:38.654544    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:38.654555    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:38.687684    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:48:38.687693    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:48:38.701378    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:48:38.701389    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:48:38.713323    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:48:38.713338    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:48:38.731246    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:48:38.731257    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:48:38.749387    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:48:38.749397    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:48:38.761834    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:48:38.761847    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:48:36.089223    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:36.089453    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:36.106738    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:36.106850    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:36.119877    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:36.119968    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:36.130886    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:36.130962    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:36.143421    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:36.143496    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:36.153786    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:36.153858    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:36.165149    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:36.165223    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:36.175517    4887 logs.go:276] 0 containers: []
	W0917 10:48:36.175528    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:36.175599    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:36.185868    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:36.185888    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:36.185893    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:36.199632    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:36.199647    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:36.212085    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:36.212096    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:36.250293    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:36.250302    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:36.276559    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:36.276571    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:36.291677    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:36.291689    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:36.305586    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:36.305602    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:36.330522    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:36.330530    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:36.349242    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:36.349258    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:36.363644    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:36.363660    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:36.375483    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:36.375492    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:36.411589    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:36.411599    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:36.432344    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:36.432355    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:36.443910    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:36.443926    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:36.454925    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:36.454938    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:36.459470    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:36.459476    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:36.473564    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:36.473576    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:38.992059    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:41.278072    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:43.993736    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:43.994045    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:44.016987    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:44.017129    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:44.034142    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:44.034238    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:44.049106    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:44.049193    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:44.063760    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:44.063842    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:44.074135    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:44.074223    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:44.084312    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:44.084385    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:44.094533    4887 logs.go:276] 0 containers: []
	W0917 10:48:44.094545    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:44.094618    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:44.105431    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:44.105447    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:44.105452    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:44.130702    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:44.130714    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:44.144783    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:44.144796    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:44.157128    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:44.157141    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:44.180397    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:44.180408    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:44.220203    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:44.220212    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:44.224714    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:44.224723    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:44.239354    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:44.239364    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:44.251069    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:44.251079    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:44.262123    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:44.262132    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:44.298025    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:44.298037    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:44.312498    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:44.312509    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:44.330981    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:44.330993    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:44.349312    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:44.349331    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:44.367874    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:44.367889    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:44.379263    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:44.379274    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:44.390988    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:44.391000    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
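
The cycle that just completed above repeats throughout this log: the test driver polls the apiserver's /healthz endpoint with a short client timeout, and every time the probe fails ("context deadline exceeded (Client.Timeout exceeded while awaiting headers)") it falls back to enumerating the control-plane containers and dumping the tail of each one's logs before probing again. The sketch below is a minimal Go illustration of that loop, not minikube's actual api_server.go/logs.go code; the endpoint URL is taken from the log, while the 5-second client timeout (inferred from the roughly 5 s spacing between "Checking" and "stopped" lines) and all helper names are assumptions for illustration.

    // probe_then_diagnose.go - a hypothetical reconstruction of the loop this log records.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverHealthy performs one healthz probe with a short client timeout,
    // mirroring the "Checking apiserver healthz at ..." / "stopped: ..." pairs above.
    func apiserverHealthy(url string) bool {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumption inferred from the log's timestamps
            // The apiserver serves a self-signed certificate, so skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false // e.g. Client.Timeout exceeded while awaiting headers
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    // containerIDs lists all containers (running or exited) named k8s_<component>,
    // as in: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        url := "https://10.0.2.15:8443/healthz"
        for attempt := 0; attempt < 3 && !apiserverHealthy(url); attempt++ {
            // On each failed probe, dump the last 400 log lines of every
            // control-plane container, matching "docker logs --tail 400 <id>".
            for _, c := range []string{"kube-apiserver", "etcd", "coredns",
                "kube-scheduler", "kube-proxy", "kube-controller-manager",
                "storage-provisioner"} {
                for _, id := range containerIDs(c) {
                    logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                    fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
                }
            }
        }
    }

Note how the same pattern explains the interleaving: two independent test processes (4746 and 4887) each run this loop against the same 10.0.2.15:8443 endpoint, so their "Checking"/"stopped" lines alternate with slightly offset timestamps.
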
	I0917 10:48:46.280521    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:46.280759    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:46.305631    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:48:46.305752    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:46.320890    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:48:46.320988    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:46.334828    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:48:46.334911    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:46.345833    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:48:46.345924    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:46.357344    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:48:46.357425    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:46.368082    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:48:46.368169    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:46.378599    4746 logs.go:276] 0 containers: []
	W0917 10:48:46.378612    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:46.378686    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:46.388327    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:48:46.388348    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:48:46.388354    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:48:46.399607    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:48:46.399622    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:48:46.412171    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:48:46.412182    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:48:46.430045    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:48:46.430054    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:48:46.441730    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:46.441740    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:46.477965    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:46.477979    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:46.511968    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:48:46.511978    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:48:46.527749    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:48:46.527758    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:48:46.540906    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:46.540920    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:46.564743    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:48:46.564752    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:46.576702    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:46.576712    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:46.581495    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:48:46.581502    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:48:46.595264    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:48:46.595276    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:48:49.111493    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:46.906087    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:54.112146    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:54.112594    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:54.143604    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:48:54.143761    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:54.166298    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:48:54.166403    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:54.181953    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:48:54.182040    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:54.193108    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:48:54.193178    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:54.203759    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:48:54.203847    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:54.215252    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:48:54.215339    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:54.226074    4746 logs.go:276] 0 containers: []
	W0917 10:48:54.226087    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:54.226164    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:54.237021    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:48:54.237035    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:48:54.237042    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:48:54.250708    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:48:54.250718    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:48:54.263029    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:48:54.263040    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:48:54.281000    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:54.281013    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:54.316810    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:48:54.316824    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:48:54.331168    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:48:54.331182    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:48:54.345810    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:48:54.345822    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:48:54.357489    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:48:54.357501    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:48:54.376873    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:48:54.376887    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:48:54.390764    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:54.390775    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:54.414902    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:48:54.414912    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:54.426760    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:54.426773    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:54.461890    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:54.461899    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:51.908326    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:51.908547    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:51.930612    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:51.930735    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:51.946114    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:51.946205    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:51.959043    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:51.959129    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:51.969924    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:51.970013    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:51.980796    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:51.980871    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:51.991469    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:51.991552    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:52.001777    4887 logs.go:276] 0 containers: []
	W0917 10:48:52.001791    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:52.001856    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:52.016664    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:52.016685    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:52.016691    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:52.033957    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:52.033972    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:52.044970    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:52.044985    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:52.061530    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:52.061542    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:52.073498    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:52.073508    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:52.077547    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:52.077559    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:52.091937    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:52.091947    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:52.117787    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:52.117802    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:52.157100    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:52.157148    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:52.172067    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:52.172081    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:52.183558    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:52.183570    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:52.195511    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:52.195521    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:52.231405    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:52.231418    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:52.243611    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:52.243624    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:52.255576    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:52.255588    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:52.273219    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:52.273233    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:52.284938    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:52.284953    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:54.811666    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:56.967767    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:59.813826    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:59.813990    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:59.833650    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:59.833745    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:59.845899    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:59.846012    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:59.856394    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:59.856478    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:59.866792    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:59.866875    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:59.877169    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:59.877248    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:59.887653    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:59.887732    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:59.897929    4887 logs.go:276] 0 containers: []
	W0917 10:48:59.897943    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:59.898003    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:59.908512    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:59.908541    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:59.908547    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:59.936200    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:59.936215    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:59.950970    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:59.950979    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:59.962664    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:59.962674    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:59.975698    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:59.975709    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:59.987981    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:59.987992    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:00.027808    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:00.027826    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:00.039181    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:00.039194    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:00.064214    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:00.064222    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:00.068662    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:00.068671    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:00.082290    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:00.082299    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:00.096801    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:00.096812    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:00.114020    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:00.114029    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:00.125919    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:00.125929    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:00.137187    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:00.137197    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:00.150944    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:00.150959    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:00.162926    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:00.162936    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:01.970006    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:01.970278    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:01.998727    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:01.998872    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:02.018671    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:02.018772    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:02.033269    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:02.033357    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:02.045142    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:02.045227    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:02.055879    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:02.055951    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:02.066384    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:02.066456    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:02.080112    4746 logs.go:276] 0 containers: []
	W0917 10:49:02.080125    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:02.080195    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:02.090452    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:02.090475    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:02.090480    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:02.102365    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:02.102376    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:02.116512    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:02.116521    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:02.129252    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:02.129262    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:02.140917    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:02.140928    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:02.161205    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:02.161218    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:02.176472    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:02.176484    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:02.190460    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:02.190473    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:02.215730    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:02.215750    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:02.227632    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:02.227647    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:02.263149    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:02.263158    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:02.267477    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:02.267485    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:02.306884    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:02.306895    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:04.823706    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:02.702359    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:09.825927    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:09.826106    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:09.838775    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:09.838873    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:09.850028    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:09.850115    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:09.860325    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:09.860410    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:09.870265    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:09.870349    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:09.880515    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:09.880594    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:09.891185    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:09.891283    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:09.901479    4746 logs.go:276] 0 containers: []
	W0917 10:49:09.901491    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:09.901557    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:09.912258    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:09.912275    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:09.912280    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:09.923650    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:09.923660    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:09.939980    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:09.939989    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:09.956894    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:09.956910    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:09.968266    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:09.968277    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:09.992822    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:09.992832    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:09.997798    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:09.997806    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:10.032409    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:10.032420    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:10.044466    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:10.044477    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:10.056190    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:10.056206    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:10.068266    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:10.068277    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:07.704462    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:07.704649    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:07.717600    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:07.717698    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:07.728313    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:07.728394    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:07.738623    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:07.738707    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:07.748915    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:07.749001    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:07.759606    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:07.759687    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:07.770192    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:07.770276    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:07.786546    4887 logs.go:276] 0 containers: []
	W0917 10:49:07.786558    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:07.786627    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:07.796695    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:07.796712    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:07.796717    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:07.819992    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:07.820000    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:07.837209    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:07.837221    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:07.862412    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:07.862422    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:07.877216    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:07.877226    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:07.891985    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:07.891994    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:07.929832    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:07.929840    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:07.943301    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:07.943312    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:07.955352    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:07.955368    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:07.967507    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:07.967517    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:08.005475    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:08.005485    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:08.020494    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:08.020505    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:08.032115    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:08.032128    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:08.043610    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:08.043620    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:08.055485    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:08.055501    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:08.067507    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:08.067532    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:08.079165    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:08.079178    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
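
Besides the per-container docker logs, each diagnostic pass above also collects host-level sources (kubelet and Docker journals, dmesg, node description, container status) by running a single shell pipeline through /bin/bash -c on the node. The command strings below are copied verbatim from this log; the surrounding Go harness, including the gatherCommands table and its field names, is a hypothetical sketch rather than minikube's actual logs.go.

    // gather_sources.go - a sketch of the host-level log gathering seen above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherCommands maps each "Gathering logs for <source> ..." step to the
    // exact shell command the log shows being run for it.
    var gatherCommands = []struct{ name, cmd string }{
        {"kubelet", "sudo journalctl -u kubelet -n 400"},
        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
        {"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes " +
            "--kubeconfig=/var/lib/minikube/kubeconfig"},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
        for _, g := range gatherCommands {
            // In the real run these execute on the guest via SSH (ssh_runner.go);
            // here they run locally for illustration.
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            if err != nil {
                continue // source unavailable on this host; the real harness tolerates gaps too
            }
            fmt.Printf("=== %s ===\n%s\n", g.name, out)
        }
    }

The "container status" entry shows why the fallback is written as one pipeline: it prefers crictl when present and degrades to docker ps -a otherwise, so the same command works on nodes with either runtime CLI installed.
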
	I0917 10:49:10.586077    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:10.101842    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:10.101855    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:10.122530    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:10.122540    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:12.638065    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:15.588196    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:15.588377    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:15.599685    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:15.599777    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:15.610381    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:15.610472    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:15.626114    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:15.626201    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:15.636833    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:15.636921    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:15.647283    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:15.647365    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:15.659310    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:15.659394    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:15.669539    4887 logs.go:276] 0 containers: []
	W0917 10:49:15.669553    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:15.669625    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:15.680384    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:15.680401    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:15.680406    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:15.705648    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:15.705664    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:15.717322    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:15.717334    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:15.729505    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:15.729527    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:15.754145    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:15.754153    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:15.766168    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:15.766178    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:15.804351    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:15.804379    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:15.808654    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:15.808660    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:15.831710    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:15.831721    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:15.845543    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:15.845554    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:15.856451    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:15.856463    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:15.870628    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:15.870642    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:15.883730    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:15.883741    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:15.894842    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:15.894852    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:15.929579    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:15.929595    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:15.944296    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:15.944307    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:15.958535    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:15.958545    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
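The block above is one complete iteration of minikube's wait-for-apiserver loop: api_server.go probes https://10.0.2.15:8443/healthz, the probe times out ("stopped"), and logs.go then enumerates and tails each control-plane container before the next probe. Two processes (PIDs 4746 and 4887, apparently one per cluster under test) interleave the same cycle in this log, which is why timestamps occasionally step backward between adjacent lines. The roughly 5 s gap between each "Checking" and its matching "stopped" line suggests a 5 s client timeout. A minimal sketch of the probe under those assumptions, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver's /healthz endpoint.
// A timeout surfaces as the "Client.Timeout exceeded" error seen above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred from the log cadence; an assumption
		Transport: &http.Transport{
			// Verification is skipped here because the apiserver in this
			// setup serves a self-signed certificate (also an assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}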
	I0917 10:49:17.639562    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:17.639804    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:17.663949    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:17.664049    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:17.677747    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:17.677834    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:17.688904    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:17.688978    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:17.703739    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:17.703825    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:17.714374    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:17.714453    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:17.725618    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:17.725701    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:17.736291    4746 logs.go:276] 0 containers: []
	W0917 10:49:17.736303    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:17.736369    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:17.746672    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:17.746685    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:17.746690    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:17.780759    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:17.780772    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:17.795174    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:17.795184    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:17.806585    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:17.806596    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:17.819256    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:17.819268    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:17.837758    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:17.837769    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:17.849112    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:17.849121    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:17.866513    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:17.866524    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:17.901324    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:17.901332    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:17.906483    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:17.906489    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:17.920741    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:17.920752    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:17.935201    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:17.935212    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:17.946520    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:17.946530    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
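Each gathering pass has the same two-step shape: list container IDs per component with a Docker name filter, then tail the last 400 lines of each. The -a flag includes exited containers, which is why a component that has been restarted reports two IDs (for example "2 containers: [fe20304b4a78 185cd67f41ca]" for kube-apiserver). A runnable sketch of that pattern, executed locally for simplicity rather than over SSH inside the guest as ssh_runner.go does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers, running or exited, whose name
// matches the k8s_<component> convention, mirroring the
// `docker ps -a --filter=name=... --format={{.ID}}` calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("etcd")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// Tail the last 400 lines of each container, as in the log.
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("==> %s <==\n%s", id, logs)
	}
}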
	I0917 10:49:18.471987    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:20.473211    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:23.473238    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:23.473406    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:23.489646    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:23.489732    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:23.500158    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:23.500240    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:23.510761    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:23.510843    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:23.521698    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:23.521776    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:23.536425    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:23.536504    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:23.546862    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:23.546945    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:23.557657    4887 logs.go:276] 0 containers: []
	W0917 10:49:23.557667    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:23.557728    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:23.568284    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:23.568301    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:23.568306    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:23.582990    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:23.583001    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:23.595797    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:23.595809    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:23.620215    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:23.620226    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:23.634803    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:23.634817    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:23.646400    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:23.646413    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:23.670992    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:23.670999    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:23.710970    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:23.710979    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:23.728657    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:23.728670    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:23.740316    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:23.740326    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:23.751630    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:23.751643    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:23.765302    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:23.765315    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:23.777120    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:23.777135    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:23.781272    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:23.781279    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:23.802401    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:23.802413    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:23.816307    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:23.816318    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:23.834462    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:23.834473    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
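The "describe nodes" step just above is the one gather that uses kubectl instead of Docker: it invokes the versioned kubectl binary that minikube installed inside the VM against the node-local kubeconfig, independent of whatever kubeconfig the host holds. A self-contained sketch of that invocation, with both paths copied verbatim from the log (illustrative only, and only meaningful when run inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths copied from the log: the kubectl binary minikube places in the
	// VM, pointed at the node-local kubeconfig.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Printf("%s", out)
}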
	I0917 10:49:25.475392    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:25.475642    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:25.498322    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:25.498452    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:25.514573    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:25.514673    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:25.527435    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:25.527525    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:25.539120    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:25.539207    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:25.549632    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:25.549715    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:25.560135    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:25.560221    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:25.569882    4746 logs.go:276] 0 containers: []
	W0917 10:49:25.569896    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:25.569960    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:25.580746    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:25.580765    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:25.580770    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:25.603427    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:25.603437    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:25.619408    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:25.619419    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:25.633807    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:25.633818    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:25.645570    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:25.645582    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:25.671136    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:25.671151    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:25.682718    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:25.682727    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:25.718065    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:25.718076    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:25.722793    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:25.722803    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:25.762919    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:25.762930    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:25.777559    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:25.777574    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:25.788854    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:25.788864    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:25.800412    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:25.800425    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:28.320572    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:26.372262    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:33.322647    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:33.322785    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:33.337319    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:33.337410    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:33.351875    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:33.351953    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:33.363487    4746 logs.go:276] 2 containers: [36a29861218c 66f12769ce86]
	I0917 10:49:33.363576    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:33.373967    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:33.374038    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:33.387801    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:33.387887    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:33.397981    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:33.398052    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:33.407927    4746 logs.go:276] 0 containers: []
	W0917 10:49:33.407940    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:33.408014    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:33.418401    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:33.418415    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:33.418421    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:33.453407    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:33.453419    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:33.467633    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:33.467643    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:33.481771    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:33.481781    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:33.493077    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:33.493087    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:33.504433    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:33.504443    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:33.518944    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:33.518954    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:33.531570    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:33.531581    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:33.542983    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:33.542994    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:33.547669    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:33.547676    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:33.581852    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:33.581862    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:33.599271    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:33.599280    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:33.622477    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:33.622484    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:31.374492    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:31.374890    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:31.402988    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:31.403139    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:31.420871    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:31.420979    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:31.434335    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:31.434422    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:31.445579    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:31.445663    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:31.455839    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:31.455918    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:31.466557    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:31.466630    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:31.478576    4887 logs.go:276] 0 containers: []
	W0917 10:49:31.478590    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:31.478669    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:31.490125    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:31.490150    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:31.490156    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:31.505416    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:31.505426    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:31.517178    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:31.517187    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:31.531990    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:31.531998    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:31.543660    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:31.543670    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:31.558633    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:31.558644    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:31.571004    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:31.571019    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:31.611028    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:31.611038    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:31.615756    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:31.615766    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:31.659683    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:31.659693    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:31.673999    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:31.674011    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:31.685950    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:31.685965    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:31.698471    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:31.698482    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:31.715506    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:31.715518    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:31.738350    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:31.738358    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:31.766296    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:31.766325    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:31.780201    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:31.780212    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:34.300035    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:36.135637    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:39.302179    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:39.302430    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:39.329678    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:39.329787    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:39.342948    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:39.343030    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:39.357186    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:39.357270    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:39.367697    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:39.367781    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:39.378298    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:39.378375    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:39.388919    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:39.388997    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:39.398739    4887 logs.go:276] 0 containers: []
	W0917 10:49:39.398750    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:39.398817    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:39.408976    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:39.408993    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:39.409000    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:39.413238    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:39.413247    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:39.434334    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:39.434344    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:39.445609    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:39.445619    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:39.456625    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:39.456633    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:39.468416    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:39.468428    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:39.485850    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:39.485859    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:39.498137    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:39.498148    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:39.515691    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:39.515705    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:39.527134    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:39.527146    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:39.550504    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:39.550515    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:39.587537    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:39.587546    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:39.633543    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:39.633559    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:39.658757    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:39.658772    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:39.673754    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:39.673764    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:39.685004    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:39.685017    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:39.708478    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:39.708489    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:41.137764    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:41.137911    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:41.156124    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:41.156223    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:41.167484    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:41.167588    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:41.178025    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:49:41.178109    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:41.188458    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:41.188534    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:41.199227    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:41.199312    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:41.210998    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:41.211080    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:41.221611    4746 logs.go:276] 0 containers: []
	W0917 10:49:41.221629    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:41.221693    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:41.232543    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:41.232561    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:41.232566    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:41.268260    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:41.268271    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:41.282352    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:49:41.282363    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:49:41.293448    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:41.293461    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:41.307336    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:41.307351    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:41.332029    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:49:41.332040    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:49:41.343572    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:41.343584    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:41.355092    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:41.355102    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:41.367046    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:41.367061    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:41.379231    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:41.379242    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:41.414430    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:41.414437    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:41.428262    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:41.428271    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:41.432901    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:41.432909    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:41.447650    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:41.447664    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:41.459919    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:41.459928    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:43.979244    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:42.228639    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:48.980925    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:48.981201    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:49.004992    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:49.005129    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:49.021585    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:49.021684    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:49.034899    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:49:49.034986    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:49.045743    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:49.045823    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:49.063262    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:49.063342    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:49.073815    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:49.073898    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:49.083527    4746 logs.go:276] 0 containers: []
	W0917 10:49:49.083541    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:49.083611    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:49.093853    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:49.093872    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:49.093880    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:49.109026    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:49:49.109040    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:49:49.120700    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:49.120714    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:49.132533    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:49.132543    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:49.144075    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:49.144085    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:49.158085    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:49.158100    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:49.169825    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:49.169834    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:49.174498    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:49.174504    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:49.211769    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:49.211781    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:49.226557    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:49.226567    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:49.247338    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:49.247348    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:49.258732    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:49:49.258744    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:49:49.276379    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:49.276388    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:49.288238    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:49.288251    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:49.324058    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:49.324070    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:47.230848    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:47.231014    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:47.245218    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:47.245304    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:47.256764    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:47.256847    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:47.266827    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:47.266904    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:47.278633    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:47.278714    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:47.288655    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:47.288733    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:47.299788    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:47.299866    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:47.310563    4887 logs.go:276] 0 containers: []
	W0917 10:49:47.310575    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:47.310644    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:47.321103    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:47.321122    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:47.321127    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:47.335429    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:47.335439    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:47.353006    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:47.353015    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:47.364861    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:47.364873    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:47.403346    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:47.403357    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:47.438303    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:47.438317    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:47.442508    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:47.442514    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:47.453802    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:47.453813    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:47.465583    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:47.465595    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:47.479205    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:47.479216    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:47.490300    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:47.490313    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:47.514845    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:47.514855    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:47.529795    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:47.529806    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:47.541638    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:47.541650    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:47.553668    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:47.553679    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:47.577854    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:47.577862    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:47.591793    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:47.591804    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:50.107916    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:51.849918    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:55.109958    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:55.110108    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:55.121101    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:55.121194    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:55.131376    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:55.131466    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:55.141774    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:55.141853    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:55.152324    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:55.152407    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:55.167061    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:55.167147    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:55.177734    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:55.177808    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:55.187448    4887 logs.go:276] 0 containers: []
	W0917 10:49:55.187465    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:55.187534    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:55.198252    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:55.198268    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:55.198273    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:55.209603    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:55.209615    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:55.226925    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:55.226935    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:55.250558    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:55.250566    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:55.254883    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:55.254889    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:55.290348    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:55.290363    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:55.305393    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:55.305405    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:55.319096    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:55.319113    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:55.330751    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:55.330767    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:55.369856    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:55.369866    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:55.381446    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:55.381457    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:55.397346    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:55.397358    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:55.410630    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:55.410640    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:55.436190    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:55.436202    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:55.450245    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:55.450257    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:55.464836    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:55.464848    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:55.483458    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:55.483469    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
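Zooming out, the cadence visible throughout this run is a deadline-bounded retry: each probe blocks for several seconds, a log-collection pass takes a few hundred milliseconds, and the next probe starts immediately, repeating until the surrounding test gives up. A self-contained sketch of that control flow; every duration below is an assumption for illustration, and both helpers are stand-ins for the real steps shown earlier:

package main

import (
	"errors"
	"fmt"
	"time"
)

// probe stands in for the healthz GET; in this run it always times out.
func probe() error {
	time.Sleep(2 * time.Second)
	return errors.New("context deadline exceeded (Client.Timeout exceeded while awaiting headers)")
}

// gatherLogs stands in for the per-component docker/journalctl collection.
func gatherLogs() { time.Sleep(300 * time.Millisecond) }

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed overall budget
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if err := probe(); err != nil {
			fmt.Printf("attempt %d: stopped: %v\n", attempt, err)
			gatherLogs()
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
	fmt.Println("deadline exceeded; apiserver never became healthy")
}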
	I0917 10:49:56.852007    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:56.852106    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:56.863389    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:49:56.863473    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:56.874236    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:49:56.874319    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:56.886100    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:49:56.886187    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:56.898406    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:49:56.898486    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:56.909096    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:49:56.909180    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:56.920378    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:49:56.920460    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:56.930933    4746 logs.go:276] 0 containers: []
	W0917 10:49:56.930943    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:56.931005    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:56.941306    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:49:56.941325    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:49:56.941330    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:49:56.959272    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:49:56.959287    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:49:56.971641    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:49:56.971671    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:49:56.983030    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:49:56.983042    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:49:57.001046    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:49:57.001058    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:49:57.012838    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:49:57.012851    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:49:57.032181    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:49:57.032192    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:49:57.046529    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:57.046543    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:57.081562    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:57.081571    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:57.086340    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:49:57.086346    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:49:57.101773    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:49:57.101785    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:49:57.114175    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:57.114186    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:57.148715    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:49:57.148726    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:49:57.160409    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:57.160422    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:57.184303    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:49:57.184309    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:59.698257    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:57.998388    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:04.699239    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
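The cycle above repeats throughout this trace: api_server.go probes the apiserver's /healthz endpoint at 10.0.2.15:8443, the probe times out after roughly five seconds, and minikube falls back to enumerating the control-plane containers by name filter before re-gathering their logs. A minimal bash sketch of an equivalent manual probe, assuming shell access to the guest; the -k flag and the five-second timeout are assumptions approximating the client behaviour seen in the log, not minikube's exact settings:

    # Probe the apiserver health endpoint, mirroring api_server.go's check.
    # -k skips TLS verification (assumption: cluster CA not at hand);
    # --max-time 5 approximates the ~5 s client timeout seen above.
    curl -ks --max-time 5 https://10.0.2.15:8443/healthz || echo "healthz unreachable"

    # On failure, enumerate the control-plane containers with the same
    # name filters the log shows, one `docker ps` per component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      echo "== $c =="
      docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}'
    done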
	I0917 10:50:04.699455    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:04.720624    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:04.720751    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:04.736375    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:04.736466    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:04.749098    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:04.749191    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:04.760954    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:04.761029    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:04.776203    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:04.776278    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:04.786707    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:04.786776    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:04.796823    4746 logs.go:276] 0 containers: []
	W0917 10:50:04.796836    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:04.796908    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:04.808684    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:04.808703    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:04.808708    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:04.820498    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:04.820510    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:04.854360    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:04.854367    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:04.897071    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:04.897081    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:04.912178    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:04.912189    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:04.929803    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:04.929816    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:04.941464    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:04.941477    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:04.966693    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:04.966701    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:04.979650    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:04.979664    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:04.991137    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:04.991150    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:05.017085    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:05.017098    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:05.031296    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:05.031310    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:05.042813    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:05.042827    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:05.054259    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:05.054272    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:05.066170    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:05.066181    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
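Each enumeration pass then fans out over the discovered container IDs, tailing the last 400 lines of every component alongside the host-side units. The individual commands appear verbatim in the log; gathered into one sketch below (the container IDs are the ones this particular run found, so substitute your own):

    # Tail the last 400 lines of each discovered container
    # (IDs taken from this run; yours will differ).
    for id in f177a5fd6d0a 00cb5784efec f1d1743ca406 684381bbeb3a \
              36a29861218c 66f12769ce86 8c9778b91bff 0a180d04355d \
              380aa7bba23d 6dbc9510eace; do
      docker logs --tail 400 "$id"
    done

    # Host-side sources collected in the same pass.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400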
	I0917 10:50:03.000612    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:03.000798    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:03.015538    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:03.015627    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:03.028088    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:03.028160    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:03.041303    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:03.041386    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:03.051759    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:03.051842    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:03.061814    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:03.061891    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:03.076728    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:03.076809    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:03.087365    4887 logs.go:276] 0 containers: []
	W0917 10:50:03.087377    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:03.087446    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:03.101290    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:03.101312    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:03.101318    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:03.113689    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:03.113702    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:03.152726    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:03.152738    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:03.167439    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:03.167456    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:03.182041    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:03.182052    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:03.196095    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:03.196111    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:03.220480    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:03.220488    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:03.224507    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:03.224519    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:03.249515    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:03.249526    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:03.286756    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:03.286771    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:03.300750    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:03.300763    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:03.312926    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:03.312936    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:03.324240    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:03.324253    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:03.338761    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:03.338772    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:03.350793    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:03.350804    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:03.368291    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:03.368301    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:03.380422    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:03.380433    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:05.894428    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:07.572580    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:10.896526    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:10.896707    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:10.915242    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:10.915350    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:10.929207    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:10.929300    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:10.940692    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:10.940775    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:10.951284    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:10.951369    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:10.965693    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:10.965778    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:10.976131    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:10.976205    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:10.986819    4887 logs.go:276] 0 containers: []
	W0917 10:50:10.986831    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:10.986898    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:10.997449    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:10.997467    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:10.997473    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:11.012185    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:11.012200    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:11.028141    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:11.028152    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:11.039776    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:11.039787    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:11.051234    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:11.051247    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:12.572970    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:12.573201    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:12.591030    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:12.591136    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:12.603528    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:12.603620    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:12.623249    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:12.623337    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:12.633712    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:12.633789    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:12.644029    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:12.644114    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:12.654713    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:12.654786    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:12.664976    4746 logs.go:276] 0 containers: []
	W0917 10:50:12.664988    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:12.665058    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:12.675225    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:12.675242    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:12.675247    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:12.689248    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:12.689263    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:12.708437    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:12.708451    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:12.725724    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:12.725736    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:12.740916    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:12.740927    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:12.757914    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:12.757925    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:12.770688    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:12.770699    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:12.781830    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:12.781840    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:12.827469    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:12.827483    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:12.839500    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:12.839510    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:12.851930    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:12.851945    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:12.869522    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:12.869540    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:12.904807    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:12.904816    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:12.909292    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:12.909299    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:12.921155    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:12.921168    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:11.090264    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:11.090272    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:11.112980    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:11.112992    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:11.127711    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:11.127726    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:11.141025    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:11.141040    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:11.154341    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:11.154352    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:11.179545    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:11.179557    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:11.190785    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:11.190793    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:11.201889    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:11.201900    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:11.237128    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:11.237140    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:11.255085    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:11.255100    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:11.278761    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:11.278769    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:11.290976    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:11.290986    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
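The "container status" step relies on a small shell fallback: `which crictl || echo crictl` expands to the full crictl path when the binary is installed, and to the bare (failing) name otherwise, so the outer || drops through to "sudo docker ps -a". A more explicit equivalent of the same fallback, for readability:

    # Prefer crictl when present, otherwise fall back to docker.
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi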
	I0917 10:50:13.797657    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:15.445182    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:18.799924    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:18.800199    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:18.825048    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:18.825184    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:18.841122    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:18.841221    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:18.854397    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:18.854481    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:18.865833    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:18.865904    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:18.876611    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:18.876696    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:18.892435    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:18.892520    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:18.906670    4887 logs.go:276] 0 containers: []
	W0917 10:50:18.906685    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:18.906756    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:18.925723    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:18.925742    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:18.925748    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:18.947473    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:18.947485    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:18.959380    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:18.959391    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:18.971803    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:18.971813    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:18.995328    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:18.995336    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:19.032823    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:19.032833    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:19.046591    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:19.046603    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:19.071307    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:19.071319    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:19.085801    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:19.085810    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:19.103946    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:19.103957    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:19.116185    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:19.116196    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:19.127946    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:19.127958    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:19.140357    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:19.140368    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:19.152370    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:19.152382    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:19.188934    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:19.188943    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:19.192914    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:19.192921    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:19.207614    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:19.207627    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:20.445524    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:20.445703    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:20.461101    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:20.461198    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:20.473666    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:20.473750    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:20.487330    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:20.487418    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:20.497613    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:20.497685    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:20.508385    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:20.508465    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:20.523459    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:20.523534    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:20.534195    4746 logs.go:276] 0 containers: []
	W0917 10:50:20.534206    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:20.534274    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:20.544545    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:20.544564    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:20.544571    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:20.558389    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:20.558402    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:20.579350    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:20.579361    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:20.590700    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:20.590713    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:20.608389    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:20.608398    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:20.620015    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:20.620030    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:20.645526    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:20.645533    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:20.657262    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:20.657272    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:20.691734    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:20.691749    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:20.703686    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:20.703697    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:20.716359    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:20.716369    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:20.728145    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:20.728154    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:20.745692    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:20.745703    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:20.781554    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:20.781569    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:20.786126    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:20.786133    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:23.303250    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:21.724835    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:28.305380    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:28.305575    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:28.320695    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:28.320792    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:28.336527    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:28.336613    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:28.347185    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:28.347266    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:28.361328    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:28.361409    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:28.372030    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:28.372116    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:28.382982    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:28.383060    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:28.393478    4746 logs.go:276] 0 containers: []
	W0917 10:50:28.393492    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:28.393560    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:28.403991    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:28.404009    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:28.404017    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:28.418328    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:28.418338    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:28.430405    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:28.430422    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:28.441805    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:28.441816    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:28.456076    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:28.456087    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:28.471165    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:28.471181    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:28.475431    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:28.475440    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:28.489646    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:28.489657    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:28.514884    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:28.514892    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:28.548894    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:28.548902    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:28.592969    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:28.592978    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:28.610965    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:28.610975    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:28.623555    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:28.623566    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:28.635376    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:28.635386    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:28.647305    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:28.647314    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:26.727052    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:26.727394    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:26.747957    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:26.748067    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:26.761769    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:26.761857    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:26.773673    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:26.773753    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:26.786199    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:26.786288    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:26.796901    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:26.796967    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:26.807645    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:26.807731    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:26.827908    4887 logs.go:276] 0 containers: []
	W0917 10:50:26.827920    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:26.827993    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:26.838462    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:26.838486    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:26.838491    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:26.849614    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:26.849625    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:26.861264    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:26.861273    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:26.900097    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:26.900109    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:26.935115    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:26.935129    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:26.952957    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:26.952973    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:26.978967    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:26.978990    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:26.996555    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:26.996576    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:27.011239    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:27.011254    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:27.015826    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:27.015835    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:27.033696    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:27.033707    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:27.051895    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:27.051906    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:27.063644    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:27.063655    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:27.074671    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:27.074683    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:27.086758    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:27.086774    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:27.101380    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:27.101393    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:27.125259    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:27.125268    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:29.640811    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:31.161353    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:34.642093    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:34.642306    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:34.659325    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:34.659428    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:34.672953    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:34.673047    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:34.684685    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:34.684771    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:34.696040    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:34.696120    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:34.706614    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:34.706695    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:34.717223    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:34.717296    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:34.728784    4887 logs.go:276] 0 containers: []
	W0917 10:50:34.728796    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:34.728871    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:34.739408    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:34.739426    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:34.739432    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:34.756304    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:34.756320    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:34.781042    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:34.781053    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:34.792084    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:34.792095    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:34.804227    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:34.804243    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:34.815793    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:34.815807    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:34.819801    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:34.819810    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:34.854570    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:34.854584    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:34.866655    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:34.866670    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:34.907830    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:34.907843    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:34.921969    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:34.921979    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:34.936427    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:34.936436    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:34.947614    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:34.947627    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:34.962370    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:34.962380    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:34.974420    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:34.974431    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:34.986146    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:34.986157    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:35.009187    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:35.009204    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:36.163439    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:36.163537    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:36.174659    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:36.174750    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:36.184866    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:36.184940    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:36.195169    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:36.195256    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:36.206390    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:36.206469    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:36.216999    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:36.217083    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:36.227804    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:36.227889    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:36.238203    4746 logs.go:276] 0 containers: []
	W0917 10:50:36.238212    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:36.238280    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:36.250202    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:36.250220    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:36.250225    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:36.285988    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:36.285999    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:36.301937    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:36.301952    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:36.314243    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:36.314254    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:36.332365    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:36.332377    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:36.347669    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:36.347684    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:36.359896    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:36.359910    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:36.395252    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:36.395260    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:36.399814    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:36.399820    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:36.413991    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:36.414006    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:36.426097    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:36.426107    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:36.438265    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:36.438274    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:36.450117    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:36.450129    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:36.462269    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:36.462284    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:36.477126    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:36.477136    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:39.003818    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:37.532607    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:44.006327    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:44.006574    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:44.025103    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:44.025214    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:44.039794    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:44.039889    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:44.052620    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:44.052709    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:44.063617    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:44.063697    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:44.074711    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:44.074794    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:44.084892    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:44.084971    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:44.095205    4746 logs.go:276] 0 containers: []
	W0917 10:50:44.095215    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:44.095280    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:44.107441    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:44.107458    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:44.107464    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:44.122217    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:44.122234    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:44.135699    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:44.135711    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:44.149044    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:44.149055    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:44.164293    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:44.164308    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:44.176127    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:44.176138    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:44.187982    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:44.187992    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:44.213866    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:44.213874    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:44.248580    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:44.248591    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:44.254075    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:44.254088    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:44.288832    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:44.288843    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:44.302809    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:44.302821    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:44.317137    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:44.317147    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:44.333222    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:44.333235    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:44.350507    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:44.350517    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
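The block above is one complete diagnostic sweep by the test harness: for each control-plane component it lists matching container IDs with a k8s_<component> name filter, then tails the last 400 lines of each container's log, alongside journalctl (kubelet, docker/cri-docker), dmesg, `kubectl describe nodes`, and a container-status listing. A condensed shell sketch of the same sweep (component list and --tail value taken from the log; the loop form is illustrative):

	for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                 kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_${component} --format='{{.ID}}'); do
	    docker logs --tail 400 "$id"
	  done
	done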
	I0917 10:50:42.534821    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:42.535244    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:42.573455    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:42.573587    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:42.588894    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:42.588984    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:42.601742    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:42.601832    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:42.613370    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:42.613456    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:42.627920    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:42.628003    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:42.638132    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:42.638207    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:42.648810    4887 logs.go:276] 0 containers: []
	W0917 10:50:42.648829    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:42.648924    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:42.660164    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:42.660185    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:42.660191    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:42.671960    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:42.671970    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:42.697603    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:42.697617    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:42.715675    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:42.715690    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:42.733853    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:42.733867    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:42.746233    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:42.746242    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:42.758232    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:42.758241    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:42.769993    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:42.770002    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:42.781949    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:42.781958    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:42.800020    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:42.800034    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:42.838120    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:42.838129    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:42.842703    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:42.842709    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:42.877029    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:42.877040    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:42.890880    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:42.890888    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:42.910704    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:42.910716    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:42.922824    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:42.922836    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:42.934667    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:42.934678    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:45.461662    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:46.864345    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:50.462521    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
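Each "Checking apiserver healthz" / "stopped:" pair in this log is a single probe of the apiserver health endpoint that gives up after about five seconds (inferred from the timestamps). An equivalent manual probe from the guest (illustrative):

	curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo ok || echo 'apiserver not ready'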
	I0917 10:50:50.462756    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:50.476593    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:50.476709    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:50.487908    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:50.487986    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:50.498526    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:50.498603    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:50.512771    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:50.512860    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:50.522904    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:50.522990    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:50.533722    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:50.533806    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:50.544060    4887 logs.go:276] 0 containers: []
	W0917 10:50:50.544071    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:50.544138    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:50.554597    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:50.554613    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:50.554619    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:50.566965    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:50.566981    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:50.571133    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:50.571141    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:50.582720    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:50.582730    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:50.595055    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:50.595066    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:50.606650    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:50.606663    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:50.628400    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:50.628408    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:50.639633    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:50.639647    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:50.678494    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:50.678503    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:50.699743    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:50.699758    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:50.713221    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:50.713237    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:50.727637    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:50.727646    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:50.741900    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:50.741915    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:50.777653    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:50.777663    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:50.802906    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:50.802918    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:50.814334    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:50.814347    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:50.826351    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:50.826362    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:51.866816    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:51.867036    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:51.884706    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:51.884808    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:51.897562    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:51.897650    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:51.908942    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:51.909024    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:51.920437    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:51.920522    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:51.930695    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:51.930769    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:51.941813    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:51.941892    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:51.952121    4746 logs.go:276] 0 containers: []
	W0917 10:50:51.952134    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:51.952202    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:51.962737    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:51.962756    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:50:51.962761    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:50:51.977819    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:51.977831    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:51.989759    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:51.989769    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:52.014648    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:50:52.014655    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:52.026241    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:52.026252    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:52.030746    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:52.030755    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:50:52.044307    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:52.044318    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:52.059657    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:50:52.059668    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:50:52.071164    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:52.071174    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:52.105522    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:52.105540    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:52.118237    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:50:52.118247    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:50:52.135666    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:52.135677    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:52.170172    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:52.170190    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:52.189518    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:50:52.189529    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:50:52.203934    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:52.203944    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:54.727372    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:53.344691    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:58.346850    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:58.346927    4887 kubeadm.go:597] duration metric: took 4m3.576523208s to restartPrimaryControlPlane
	W0917 10:50:58.346996    4887 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 10:50:58.347025    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0917 10:50:59.354452    4887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.007447584s)
	I0917 10:50:59.354535    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:50:59.359659    4887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:50:59.362365    4887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:50:59.365178    4887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 10:50:59.365184    4887 kubeadm.go:157] found existing configuration files:
	
	I0917 10:50:59.365209    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf
	I0917 10:50:59.367670    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 10:50:59.367694    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:50:59.370247    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf
	I0917 10:50:59.373188    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 10:50:59.373210    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:50:59.376237    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf
	I0917 10:50:59.378692    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 10:50:59.378718    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:50:59.381657    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf
	I0917 10:50:59.384729    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 10:50:59.384755    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
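The grep/rm sequence above is the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and removed when the endpoint is absent. Here none of the files exist, so every grep exits with status 2 and each rm is a no-op. Condensed into a loop (endpoint and paths from the log; the loop form is illustrative):

	endpoint="https://control-plane.minikube.internal:50495"
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$endpoint" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done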
	I0917 10:50:59.387364    4887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 10:50:59.405190    4887 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 10:50:59.405228    4887 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 10:50:59.454062    4887 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 10:50:59.454114    4887 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 10:50:59.454156    4887 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 10:50:59.504108    4887 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 10:50:59.509332    4887 out.go:235]   - Generating certificates and keys ...
	I0917 10:50:59.509368    4887 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 10:50:59.509400    4887 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 10:50:59.509467    4887 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 10:50:59.509545    4887 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 10:50:59.509611    4887 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 10:50:59.509669    4887 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 10:50:59.509751    4887 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 10:50:59.509798    4887 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 10:50:59.509852    4887 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 10:50:59.509908    4887 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 10:50:59.509974    4887 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 10:50:59.510020    4887 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 10:50:59.592095    4887 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 10:50:59.669100    4887 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 10:50:59.762830    4887 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 10:50:59.795626    4887 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 10:50:59.829048    4887 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 10:50:59.829422    4887 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 10:50:59.829451    4887 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 10:50:59.916953    4887 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 10:50:59.729469    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:59.729578    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:59.741139    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:50:59.741230    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:59.751824    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:50:59.751907    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:59.764120    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:50:59.764193    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:59.775108    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:50:59.775194    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:59.785955    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:50:59.786039    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:59.798331    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:50:59.798412    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:59.809475    4746 logs.go:276] 0 containers: []
	W0917 10:50:59.809488    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:59.809561    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:59.821266    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:50:59.821284    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:59.821290    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:59.858396    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:50:59.858409    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:50:59.870067    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:50:59.870081    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:50:59.888515    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:59.888526    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:59.893183    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:59.893191    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:59.930666    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:50:59.930678    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:50:59.946763    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:50:59.946773    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:50:59.958347    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:50:59.958363    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:50:59.970193    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:59.970203    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:59.995044    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:50:59.995051    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:00.009829    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:00.009843    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:00.023489    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:00.023503    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:00.038386    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:00.038396    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:00.050153    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:00.050163    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:00.062308    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:00.062318    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:59.920812    4887 out.go:235]   - Booting up control plane ...
	I0917 10:50:59.920873    4887 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 10:50:59.923044    4887 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 10:50:59.923139    4887 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 10:50:59.923293    4887 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 10:50:59.923390    4887 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
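At this point kubeadm has written the four static Pod manifests and is waiting for the kubelet to start them as static Pods. From inside the guest the handoff can be confirmed with (illustrative; the file names are kubeadm's standard ones):

	ls /etc/kubernetes/manifests
	# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml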
	I0917 10:51:02.576499    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:04.421387    4887 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501986 seconds
	I0917 10:51:04.421446    4887 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 10:51:04.424750    4887 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 10:51:04.939312    4887 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 10:51:04.939571    4887 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-293000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 10:51:05.442792    4887 kubeadm.go:310] [bootstrap-token] Using token: 4qi2qg.9x5j38z4v8y3lhdh
	I0917 10:51:05.448491    4887 out.go:235]   - Configuring RBAC rules ...
	I0917 10:51:05.448558    4887 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 10:51:05.448601    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 10:51:05.454239    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 10:51:05.455100    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 10:51:05.456091    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 10:51:05.458259    4887 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 10:51:05.461675    4887 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 10:51:05.628769    4887 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 10:51:05.849224    4887 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 10:51:05.849967    4887 kubeadm.go:310] 
	I0917 10:51:05.850000    4887 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 10:51:05.850003    4887 kubeadm.go:310] 
	I0917 10:51:05.850041    4887 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 10:51:05.850044    4887 kubeadm.go:310] 
	I0917 10:51:05.850058    4887 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 10:51:05.850100    4887 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 10:51:05.850141    4887 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 10:51:05.850146    4887 kubeadm.go:310] 
	I0917 10:51:05.850173    4887 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 10:51:05.850177    4887 kubeadm.go:310] 
	I0917 10:51:05.850211    4887 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 10:51:05.850216    4887 kubeadm.go:310] 
	I0917 10:51:05.850242    4887 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 10:51:05.850296    4887 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 10:51:05.850359    4887 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 10:51:05.850362    4887 kubeadm.go:310] 
	I0917 10:51:05.850409    4887 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 10:51:05.850450    4887 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 10:51:05.850453    4887 kubeadm.go:310] 
	I0917 10:51:05.850497    4887 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qi2qg.9x5j38z4v8y3lhdh \
	I0917 10:51:05.850558    4887 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d \
	I0917 10:51:05.850570    4887 kubeadm.go:310] 	--control-plane 
	I0917 10:51:05.850573    4887 kubeadm.go:310] 
	I0917 10:51:05.850614    4887 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 10:51:05.850617    4887 kubeadm.go:310] 
	I0917 10:51:05.850656    4887 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qi2qg.9x5j38z4v8y3lhdh \
	I0917 10:51:05.850712    4887 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d 
	I0917 10:51:05.850842    4887 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
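The preflight warning above is kubeadm's own advice; on a systemd guest it is resolved with the standard unit command:

	sudo systemctl enable kubelet.service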
	I0917 10:51:05.850857    4887 cni.go:84] Creating CNI manager for ""
	I0917 10:51:05.850868    4887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:51:05.855074    4887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 10:51:05.863962    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 10:51:05.867112    4887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
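The 496-byte file copied above is minikube's bridge CNI configuration; its exact contents are not captured in the log. A representative bridge conflist of this kind (contents assumed, not verbatim from this run):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF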
	I0917 10:51:05.872351    4887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 10:51:05.872410    4887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 10:51:05.872416    4887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-293000 minikube.k8s.io/updated_at=2024_09_17T10_51_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=stopped-upgrade-293000 minikube.k8s.io/primary=true
	I0917 10:51:05.917297    4887 ops.go:34] apiserver oom_adj: -16
	I0917 10:51:05.917313    4887 kubeadm.go:1113] duration metric: took 44.955416ms to wait for elevateKubeSystemPrivileges
	I0917 10:51:05.917322    4887 kubeadm.go:394] duration metric: took 4m11.16067075s to StartCluster
	I0917 10:51:05.917332    4887 settings.go:142] acquiring lock: {Name:mk01dda79792b7eaa96d8ee72bfae59b39d5fab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:51:05.917420    4887 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:51:05.917819    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:51:05.918021    4887 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:51:05.918060    4887 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:51:05.918103    4887 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-293000"
	I0917 10:51:05.918111    4887 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-293000"
	W0917 10:51:05.918115    4887 addons.go:243] addon storage-provisioner should already be in state true
	I0917 10:51:05.918128    4887 host.go:66] Checking if "stopped-upgrade-293000" exists ...
	I0917 10:51:05.918153    4887 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-293000"
	I0917 10:51:05.918192    4887 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:51:05.918201    4887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-293000"
	I0917 10:51:05.919337    4887 kapi.go:59] client config for stopped-upgrade-293000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10421d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:51:05.919458    4887 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-293000"
	W0917 10:51:05.919463    4887 addons.go:243] addon default-storageclass should already be in state true
	I0917 10:51:05.919469    4887 host.go:66] Checking if "stopped-upgrade-293000" exists ...
	I0917 10:51:05.921981    4887 out.go:177] * Verifying Kubernetes components...
	I0917 10:51:05.922276    4887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 10:51:05.923265    4887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 10:51:05.923281    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:51:05.925935    4887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:51:05.929986    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:51:05.933973    4887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:51:05.933980    4887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 10:51:05.933986    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:51:06.022985    4887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:51:06.028369    4887 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:51:06.028414    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:51:06.032170    4887 api_server.go:72] duration metric: took 114.141166ms to wait for apiserver process to appear ...
	I0917 10:51:06.032178    4887 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:51:06.032184    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:06.053983    4887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
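The 271-byte storageclass.yaml applied above comes from the default-storageclass addon; the log does not show its contents. A representative default StorageClass manifest for minikube's hostpath provisioner (assumed, not verbatim):

	cat <<'EOF' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f -
	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath
	EOF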
	I0917 10:51:07.578603    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:07.578790    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:07.593555    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:07.593641    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:07.606361    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:07.606446    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:07.618156    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:51:07.618245    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:07.628393    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:07.628475    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:07.638366    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:07.638447    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:07.648988    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:07.649073    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:07.659270    4746 logs.go:276] 0 containers: []
	W0917 10:51:07.659283    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:07.659353    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:07.670339    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:07.670356    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:07.670362    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:07.704744    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:07.704759    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:07.709444    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:07.709453    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:07.723978    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:07.723994    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:07.735660    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:07.735672    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:07.748543    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:07.748559    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:07.766581    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:07.766595    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:07.784080    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:07.784090    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:07.798691    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:07.798702    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:07.811022    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:07.811037    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:07.834520    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:07.834529    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:07.869586    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:51:07.869600    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:51:07.881343    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:07.881354    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:07.893282    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:07.893294    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:07.905122    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:07.905132    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:06.078400    4887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:51:06.426396    4887 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:51:06.426408    4887 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:51:11.034117    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:11.034154    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:10.419206    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:16.034648    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:16.034698    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:15.421337    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:15.421461    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:15.432418    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:15.432488    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:15.442605    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:15.442690    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:15.453007    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:51:15.453088    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:15.463653    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:15.463731    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:15.474375    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:15.474460    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:15.484807    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:15.484887    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:15.495449    4746 logs.go:276] 0 containers: []
	W0917 10:51:15.495459    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:15.495527    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:15.505761    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:15.505779    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:15.505786    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:15.529927    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:15.529938    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:15.541159    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:15.541169    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:15.555536    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:15.555551    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:15.567461    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:15.567473    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:15.580886    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:51:15.580898    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:51:15.592801    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:15.592813    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:15.607870    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:15.607881    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:15.619664    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:15.619679    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:15.631580    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:15.631595    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:15.642987    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:15.643002    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:15.660610    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:15.660620    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:15.672595    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:15.672609    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:15.706651    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:15.706665    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:15.711724    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:15.711740    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:18.248610    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:21.034987    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:21.035022    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:23.250610    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:23.250809    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:23.262050    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:23.262128    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:23.272819    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:23.272898    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:23.283752    4746 logs.go:276] 4 containers: [f1d1743ca406 684381bbeb3a 36a29861218c 66f12769ce86]
	I0917 10:51:23.283835    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:23.294541    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:23.294615    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:23.310859    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:23.310934    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:23.321239    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:23.321318    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:23.337623    4746 logs.go:276] 0 containers: []
	W0917 10:51:23.337638    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:23.337712    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:23.348954    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:23.348970    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:23.348975    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:23.363635    4746 logs.go:123] Gathering logs for coredns [36a29861218c] ...
	I0917 10:51:23.363650    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a29861218c"
	I0917 10:51:23.375683    4746 logs.go:123] Gathering logs for coredns [66f12769ce86] ...
	I0917 10:51:23.375694    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f12769ce86"
	I0917 10:51:23.388229    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:23.388241    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:23.402831    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:23.402841    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:23.414669    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:23.414679    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:23.448165    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:23.448174    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:23.453070    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:23.453079    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:23.486735    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:23.486749    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:23.498487    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:23.498509    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:23.513663    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:23.513675    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:23.526210    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:23.526224    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:23.543253    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:23.543266    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:23.554797    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:23.554809    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:23.566792    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:23.566802    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:26.035504    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:26.035551    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:26.091322    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:31.036304    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:31.036354    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:31.093434    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:31.093529    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:51:31.104602    4746 logs.go:276] 1 containers: [f177a5fd6d0a]
	I0917 10:51:31.104694    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:51:31.115091    4746 logs.go:276] 1 containers: [00cb5784efec]
	I0917 10:51:31.115177    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:51:31.125844    4746 logs.go:276] 4 containers: [72019332a1d8 d3af68a4aad3 f1d1743ca406 684381bbeb3a]
	I0917 10:51:31.125931    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:51:31.136324    4746 logs.go:276] 1 containers: [8c9778b91bff]
	I0917 10:51:31.136410    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:51:31.149113    4746 logs.go:276] 1 containers: [0a180d04355d]
	I0917 10:51:31.149197    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:51:31.160973    4746 logs.go:276] 1 containers: [380aa7bba23d]
	I0917 10:51:31.161051    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:51:31.171134    4746 logs.go:276] 0 containers: []
	W0917 10:51:31.171144    4746 logs.go:278] No container was found matching "kindnet"
	I0917 10:51:31.171212    4746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:51:31.181607    4746 logs.go:276] 1 containers: [6dbc9510eace]
	I0917 10:51:31.181626    4746 logs.go:123] Gathering logs for dmesg ...
	I0917 10:51:31.181632    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:51:31.186087    4746 logs.go:123] Gathering logs for etcd [00cb5784efec] ...
	I0917 10:51:31.186096    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cb5784efec"
	I0917 10:51:31.200437    4746 logs.go:123] Gathering logs for coredns [d3af68a4aad3] ...
	I0917 10:51:31.200445    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3af68a4aad3"
	I0917 10:51:31.212772    4746 logs.go:123] Gathering logs for coredns [f1d1743ca406] ...
	I0917 10:51:31.212789    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1d1743ca406"
	I0917 10:51:31.226382    4746 logs.go:123] Gathering logs for storage-provisioner [6dbc9510eace] ...
	I0917 10:51:31.226394    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbc9510eace"
	I0917 10:51:31.237932    4746 logs.go:123] Gathering logs for kubelet ...
	I0917 10:51:31.237941    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:51:31.272639    4746 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:51:31.272653    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:51:31.316165    4746 logs.go:123] Gathering logs for kube-proxy [0a180d04355d] ...
	I0917 10:51:31.316178    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a180d04355d"
	I0917 10:51:31.328180    4746 logs.go:123] Gathering logs for kube-apiserver [f177a5fd6d0a] ...
	I0917 10:51:31.328194    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f177a5fd6d0a"
	I0917 10:51:31.342053    4746 logs.go:123] Gathering logs for coredns [72019332a1d8] ...
	I0917 10:51:31.342067    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72019332a1d8"
	I0917 10:51:31.353010    4746 logs.go:123] Gathering logs for kube-scheduler [8c9778b91bff] ...
	I0917 10:51:31.353026    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c9778b91bff"
	I0917 10:51:31.367008    4746 logs.go:123] Gathering logs for Docker ...
	I0917 10:51:31.367018    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:51:31.390153    4746 logs.go:123] Gathering logs for coredns [684381bbeb3a] ...
	I0917 10:51:31.390162    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 684381bbeb3a"
	I0917 10:51:31.401780    4746 logs.go:123] Gathering logs for kube-controller-manager [380aa7bba23d] ...
	I0917 10:51:31.401792    4746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 380aa7bba23d"
	I0917 10:51:31.420631    4746 logs.go:123] Gathering logs for container status ...
	I0917 10:51:31.420642    4746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:51:33.933532    4746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:36.037228    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:36.037279    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 10:51:36.427725    4887 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 10:51:36.431899    4887 out.go:177] * Enabled addons: storage-provisioner
	I0917 10:51:38.935895    4746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:38.940776    4746 out.go:201] 
	W0917 10:51:38.944974    4746 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0917 10:51:38.944991    4746 out.go:270] * 
	W0917 10:51:38.946280    4746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:51:38.956887    4746 out.go:201] 
	I0917 10:51:36.439808    4887 addons.go:510] duration metric: took 30.522691042s for enable addons: enabled=[storage-provisioner]
	I0917 10:51:41.038510    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:41.038558    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:46.040172    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:46.040252    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:51.042598    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:51.042655    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-09-17 17:42:49 UTC, ends at Tue 2024-09-17 17:51:55 UTC. --
	Sep 17 17:51:34 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 17:51:39 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 17:51:39 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:39Z" level=error msg="ContainerStats resp: {0x400066a580 linux}"
	Sep 17 17:51:39 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:39Z" level=error msg="ContainerStats resp: {0x400066ae40 linux}"
	Sep 17 17:51:40 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:40Z" level=error msg="ContainerStats resp: {0x40001fb880 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x40001fbf00 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x4000864a40 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x4000864b80 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x4000865380 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x40007c6e40 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x40007c7400 linux}"
	Sep 17 17:51:41 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:41Z" level=error msg="ContainerStats resp: {0x40007c7800 linux}"
	Sep 17 17:51:44 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 17:51:49 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:49Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 17:51:51 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:51Z" level=error msg="ContainerStats resp: {0x400066bb80 linux}"
	Sep 17 17:51:51 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:51Z" level=error msg="ContainerStats resp: {0x40001fb040 linux}"
	Sep 17 17:51:52 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:52Z" level=error msg="ContainerStats resp: {0x4000876280 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x4000877280 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x40004e5540 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x40004e59c0 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x40007c61c0 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x40007c6780 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x4000864f80 linux}"
	Sep 17 17:51:53 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:53Z" level=error msg="ContainerStats resp: {0x4000865140 linux}"
	Sep 17 17:51:54 running-upgrade-161000 cri-dockerd[2713]: time="2024-09-17T17:51:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	72019332a1d89       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   374a447bde03d
	d3af68a4aad3f       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   168a209c09148
	f1d1743ca406a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   168a209c09148
	684381bbeb3aa       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   374a447bde03d
	6dbc9510eacef       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   c2bece588f212
	0a180d04355d2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   b47686cc8ffe1
	8c9778b91bffe       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c6b17ac0df22b
	380aa7bba23d5       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   71ca4979008b5
	f177a5fd6d0ae       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   8c58dfb70eac5
	00cb5784efec8       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   852a9050bb427
	
	
	==> coredns [684381bbeb3a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:58009->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:60964->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:38918->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:45226->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:38681->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:44900->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:56623->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:40161->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:54391->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 730843338228102095.4023652212331651519. HINFO: read udp 10.244.0.3:35282->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [72019332a1d8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 386705444372259850.4917068373235843143. HINFO: read udp 10.244.0.3:41491->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 386705444372259850.4917068373235843143. HINFO: read udp 10.244.0.3:43924->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 386705444372259850.4917068373235843143. HINFO: read udp 10.244.0.3:58160->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 386705444372259850.4917068373235843143. HINFO: read udp 10.244.0.3:35116->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 386705444372259850.4917068373235843143. HINFO: read udp 10.244.0.3:34657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 386705444372259850.4917068373235843143. HINFO: read udp 10.244.0.3:44161->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d3af68a4aad3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:45058->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:35331->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:56141->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:46051->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:32861->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:56740->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8943558382193152019.7096152756370628549. HINFO: read udp 10.244.0.2:44593->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f1d1743ca406] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:47238->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:50610->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:55404->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:40456->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:39751->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:59466->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:55913->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:59315->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:43183->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7221119986111658316.257257408880527872. HINFO: read udp 10.244.0.2:48752->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-161000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-161000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=running-upgrade-161000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T10_47_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:47:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-161000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:51:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:47:38 +0000   Tue, 17 Sep 2024 17:47:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:47:38 +0000   Tue, 17 Sep 2024 17:47:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:47:38 +0000   Tue, 17 Sep 2024 17:47:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:47:38 +0000   Tue, 17 Sep 2024 17:47:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-161000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfcae6f6b8754efe9a25762ebf892385
	  System UUID:                dfcae6f6b8754efe9a25762ebf892385
	  Boot ID:                    89e8961c-0a7f-4eab-8ae6-c2b8f6f6e07e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pshqp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-zl2rw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-161000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-161000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-161000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-b8kkf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-161000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-161000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-161000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-161000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-161000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-161000 event: Registered Node running-upgrade-161000 in Controller
	
	
	==> dmesg <==
	[  +0.060076] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.086739] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.133343] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085472] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.083550] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.074616] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +8.160292] systemd-fstab-generator[1916]: Ignoring "noauto" for root device
	[  +2.497908] systemd-fstab-generator[2193]: Ignoring "noauto" for root device
	[  +0.156393] systemd-fstab-generator[2227]: Ignoring "noauto" for root device
	[  +0.090088] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.103125] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +1.635758] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.148349] systemd-fstab-generator[2670]: Ignoring "noauto" for root device
	[  +0.085497] systemd-fstab-generator[2681]: Ignoring "noauto" for root device
	[  +0.085545] systemd-fstab-generator[2692]: Ignoring "noauto" for root device
	[  +0.095695] systemd-fstab-generator[2706]: Ignoring "noauto" for root device
	[  +2.706425] systemd-fstab-generator[3244]: Ignoring "noauto" for root device
	[  +3.044825] kauditd_printk_skb: 30 callbacks suppressed
	[  +1.467529] systemd-fstab-generator[3898]: Ignoring "noauto" for root device
	[  +0.843647] systemd-fstab-generator[4075]: Ignoring "noauto" for root device
	[ +20.081034] kauditd_printk_skb: 29 callbacks suppressed
	[Sep17 17:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.436058] systemd-fstab-generator[12294]: Ignoring "noauto" for root device
	[  +6.143256] systemd-fstab-generator[12898]: Ignoring "noauto" for root device
	[  +0.469516] systemd-fstab-generator[13033]: Ignoring "noauto" for root device
	
	
	==> etcd [00cb5784efec] <==
	{"level":"info","ts":"2024-09-17T17:47:33.230Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T17:47:33.230Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-17T17:47:33.230Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-17T17:47:33.230Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-17T17:47:33.230Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T17:47:33.230Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-17T17:47:33.231Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T17:47:34.181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-161000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:47:34.182Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:47:34.183Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:47:34.183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:47:34.183Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:47:34.183Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 17:51:55 up 9 min,  0 users,  load average: 0.53, 0.49, 0.25
	Linux running-upgrade-161000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f177a5fd6d0a] <==
	I0917 17:47:35.369143       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0917 17:47:35.389443       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:47:35.389520       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0917 17:47:35.389537       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0917 17:47:35.392549       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0917 17:47:35.392738       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:47:35.407835       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0917 17:47:36.113703       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0917 17:47:36.286447       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0917 17:47:36.292734       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0917 17:47:36.292751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 17:47:36.436878       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 17:47:36.449890       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 17:47:36.555765       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0917 17:47:36.557984       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0917 17:47:36.558352       1 controller.go:611] quota admission added evaluator for: endpoints
	I0917 17:47:36.559925       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 17:47:37.427941       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0917 17:47:38.137792       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0917 17:47:38.141232       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0917 17:47:38.146011       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0917 17:47:38.194983       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:47:51.499437       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0917 17:47:51.697594       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0917 17:47:52.221420       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [380aa7bba23d] <==
	I0917 17:47:51.006472       1 event.go:294] "Event occurred" object="running-upgrade-161000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-161000 event: Registered Node running-upgrade-161000 in Controller"
	I0917 17:47:51.037025       1 shared_informer.go:262] Caches are synced for deployment
	I0917 17:47:51.047079       1 shared_informer.go:262] Caches are synced for endpoint
	I0917 17:47:51.047079       1 shared_informer.go:262] Caches are synced for daemon sets
	I0917 17:47:51.047085       1 shared_informer.go:262] Caches are synced for ephemeral
	I0917 17:47:51.048116       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0917 17:47:51.097084       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0917 17:47:51.098074       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0917 17:47:51.112503       1 shared_informer.go:262] Caches are synced for crt configmap
	I0917 17:47:51.148487       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0917 17:47:51.154717       1 shared_informer.go:262] Caches are synced for resource quota
	I0917 17:47:51.197205       1 shared_informer.go:262] Caches are synced for HPA
	I0917 17:47:51.198431       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0917 17:47:51.200803       1 shared_informer.go:262] Caches are synced for resource quota
	I0917 17:47:51.246791       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0917 17:47:51.246795       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0917 17:47:51.246800       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0917 17:47:51.246805       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0917 17:47:51.501527       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0917 17:47:51.612738       1 shared_informer.go:262] Caches are synced for garbage collector
	I0917 17:47:51.632907       1 shared_informer.go:262] Caches are synced for garbage collector
	I0917 17:47:51.632914       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0917 17:47:51.700369       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-b8kkf"
	I0917 17:47:52.000428       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pshqp"
	I0917 17:47:52.005659       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zl2rw"
	
	
	==> kube-proxy [0a180d04355d] <==
	I0917 17:47:52.201386       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0917 17:47:52.201411       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0917 17:47:52.201421       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0917 17:47:52.219660       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0917 17:47:52.219674       1 server_others.go:206] "Using iptables Proxier"
	I0917 17:47:52.219701       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0917 17:47:52.219819       1 server.go:661] "Version info" version="v1.24.1"
	I0917 17:47:52.219824       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:47:52.220084       1 config.go:317] "Starting service config controller"
	I0917 17:47:52.220097       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0917 17:47:52.220104       1 config.go:226] "Starting endpoint slice config controller"
	I0917 17:47:52.220106       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0917 17:47:52.220397       1 config.go:444] "Starting node config controller"
	I0917 17:47:52.220400       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0917 17:47:52.320613       1 shared_informer.go:262] Caches are synced for node config
	I0917 17:47:52.320628       1 shared_informer.go:262] Caches are synced for service config
	I0917 17:47:52.320640       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8c9778b91bff] <==
	W0917 17:47:35.349366       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:47:35.349369       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0917 17:47:35.349379       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:47:35.349382       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0917 17:47:35.349410       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:47:35.349417       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0917 17:47:35.349447       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:47:35.349454       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0917 17:47:35.349466       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:47:35.349471       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 17:47:35.349482       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:47:35.349484       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0917 17:47:35.349508       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:47:35.349515       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0917 17:47:35.349534       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:47:35.349537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0917 17:47:35.349548       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:47:35.349554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0917 17:47:35.349566       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:47:35.349569       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0917 17:47:35.349574       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 17:47:35.349578       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0917 17:47:36.378586       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:47:36.378602       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0917 17:47:36.948334       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-09-17 17:42:49 UTC, ends at Tue 2024-09-17 17:51:55 UTC. --
	Sep 17 17:47:39 running-upgrade-161000 kubelet[12904]: I0917 17:47:39.600182   12904 reconciler.go:157] "Reconciler: start to sync state"
	Sep 17 17:47:39 running-upgrade-161000 kubelet[12904]: E0917 17:47:39.775728   12904 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-161000\" already exists" pod="kube-system/etcd-running-upgrade-161000"
	Sep 17 17:47:39 running-upgrade-161000 kubelet[12904]: E0917 17:47:39.979877   12904 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-161000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-161000"
	Sep 17 17:47:40 running-upgrade-161000 kubelet[12904]: E0917 17:47:40.170641   12904 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-161000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-161000"
	Sep 17 17:47:50 running-upgrade-161000 kubelet[12904]: I0917 17:47:50.997071   12904 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 17:47:50 running-upgrade-161000 kubelet[12904]: I0917 17:47:50.997583   12904 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.012257   12904 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.098260   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkgxk\" (UniqueName: \"kubernetes.io/projected/81e91291-cd30-4a5c-aa3b-f0fb4062f3d4-kube-api-access-xkgxk\") pod \"storage-provisioner\" (UID: \"81e91291-cd30-4a5c-aa3b-f0fb4062f3d4\") " pod="kube-system/storage-provisioner"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.098288   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/81e91291-cd30-4a5c-aa3b-f0fb4062f3d4-tmp\") pod \"storage-provisioner\" (UID: \"81e91291-cd30-4a5c-aa3b-f0fb4062f3d4\") " pod="kube-system/storage-provisioner"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: E0917 17:47:51.204093   12904 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: E0917 17:47:51.204169   12904 projected.go:192] Error preparing data for projected volume kube-api-access-xkgxk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: E0917 17:47:51.204222   12904 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/81e91291-cd30-4a5c-aa3b-f0fb4062f3d4-kube-api-access-xkgxk podName:81e91291-cd30-4a5c-aa3b-f0fb4062f3d4 nodeName:}" failed. No retries permitted until 2024-09-17 17:47:51.704207837 +0000 UTC m=+13.578324015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xkgxk" (UniqueName: "kubernetes.io/projected/81e91291-cd30-4a5c-aa3b-f0fb4062f3d4-kube-api-access-xkgxk") pod "storage-provisioner" (UID: "81e91291-cd30-4a5c-aa3b-f0fb4062f3d4") : configmap "kube-root-ca.crt" not found
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.703483   12904 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.901917   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6761e8a-7e76-42f8-8719-a1e08eaa6dac-kube-proxy\") pod \"kube-proxy-b8kkf\" (UID: \"d6761e8a-7e76-42f8-8719-a1e08eaa6dac\") " pod="kube-system/kube-proxy-b8kkf"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.901944   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5k5\" (UniqueName: \"kubernetes.io/projected/d6761e8a-7e76-42f8-8719-a1e08eaa6dac-kube-api-access-7r5k5\") pod \"kube-proxy-b8kkf\" (UID: \"d6761e8a-7e76-42f8-8719-a1e08eaa6dac\") " pod="kube-system/kube-proxy-b8kkf"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.901956   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6761e8a-7e76-42f8-8719-a1e08eaa6dac-xtables-lock\") pod \"kube-proxy-b8kkf\" (UID: \"d6761e8a-7e76-42f8-8719-a1e08eaa6dac\") " pod="kube-system/kube-proxy-b8kkf"
	Sep 17 17:47:51 running-upgrade-161000 kubelet[12904]: I0917 17:47:51.901967   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6761e8a-7e76-42f8-8719-a1e08eaa6dac-lib-modules\") pod \"kube-proxy-b8kkf\" (UID: \"d6761e8a-7e76-42f8-8719-a1e08eaa6dac\") " pod="kube-system/kube-proxy-b8kkf"
	Sep 17 17:47:52 running-upgrade-161000 kubelet[12904]: I0917 17:47:52.004974   12904 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 17:47:52 running-upgrade-161000 kubelet[12904]: I0917 17:47:52.012579   12904 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 17:47:52 running-upgrade-161000 kubelet[12904]: I0917 17:47:52.104115   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9q5r\" (UniqueName: \"kubernetes.io/projected/c49fcb05-ee80-4599-8048-883744a03b2c-kube-api-access-x9q5r\") pod \"coredns-6d4b75cb6d-pshqp\" (UID: \"c49fcb05-ee80-4599-8048-883744a03b2c\") " pod="kube-system/coredns-6d4b75cb6d-pshqp"
	Sep 17 17:47:52 running-upgrade-161000 kubelet[12904]: I0917 17:47:52.104137   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c49fcb05-ee80-4599-8048-883744a03b2c-config-volume\") pod \"coredns-6d4b75cb6d-pshqp\" (UID: \"c49fcb05-ee80-4599-8048-883744a03b2c\") " pod="kube-system/coredns-6d4b75cb6d-pshqp"
	Sep 17 17:47:52 running-upgrade-161000 kubelet[12904]: I0917 17:47:52.104148   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71c995e5-65a9-4ee8-b875-d428b2a9b2aa-config-volume\") pod \"coredns-6d4b75cb6d-zl2rw\" (UID: \"71c995e5-65a9-4ee8-b875-d428b2a9b2aa\") " pod="kube-system/coredns-6d4b75cb6d-zl2rw"
	Sep 17 17:47:52 running-upgrade-161000 kubelet[12904]: I0917 17:47:52.104158   12904 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6wg\" (UniqueName: \"kubernetes.io/projected/71c995e5-65a9-4ee8-b875-d428b2a9b2aa-kube-api-access-br6wg\") pod \"coredns-6d4b75cb6d-zl2rw\" (UID: \"71c995e5-65a9-4ee8-b875-d428b2a9b2aa\") " pod="kube-system/coredns-6d4b75cb6d-zl2rw"
	Sep 17 17:51:30 running-upgrade-161000 kubelet[12904]: I0917 17:51:30.384678   12904 scope.go:110] "RemoveContainer" containerID="66f12769ce86f5c38e9c42f7fd0a9a913c206e9eb25b52099f6e06ca26d76c61"
	Sep 17 17:51:30 running-upgrade-161000 kubelet[12904]: I0917 17:51:30.396337   12904 scope.go:110] "RemoveContainer" containerID="36a29861218c2878d6eb46f8ef318fabbefadb20ed655cef94dc8c180598f77a"
	
	
	==> storage-provisioner [6dbc9510eace] <==
	I0917 17:47:52.179169       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 17:47:52.183669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 17:47:52.184262       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 17:47:52.187533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 17:47:52.187736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a48b6af5-e8da-4bf6-9d72-2b71dc8f182f", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-161000_ad7de6dd-adbd-472f-93a2-88c0bada8b32 became leader
	I0917 17:47:52.187796       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-161000_ad7de6dd-adbd-472f-93a2-88c0bada8b32!
	I0917 17:47:52.289102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-161000_ad7de6dd-adbd-472f-93a2-88c0bada8b32!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-161000 -n running-upgrade-161000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-161000 -n running-upgrade-161000: exit status 2 (15.628166375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-161000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-161000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-161000: (1.151586958s)
--- FAIL: TestRunningBinaryUpgrade (597.56s)
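The kubelet log above ends with two RemoveContainer events, and the post-mortem status probe reports the apiserver as "Stopped" just before the profile is deleted. A minimal triage sketch for inspecting such a profile by hand before cleanup, using only commands this harness already exercises (the profile name is taken from this run):

    # Re-run the status probe and capture full logs while the profile still exists
    out/minikube-darwin-arm64 status -p running-upgrade-161000
    out/minikube-darwin-arm64 logs -p running-upgrade-161000 --file=logs.txt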

TestKubernetesUpgrade (17.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-875000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-875000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.808683375s)

-- stdout --
	* [kubernetes-upgrade-875000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-875000" primary control-plane node in "kubernetes-upgrade-875000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-875000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:45:14.781073    4816 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:45:14.781228    4816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:45:14.781232    4816 out.go:358] Setting ErrFile to fd 2...
	I0917 10:45:14.781235    4816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:45:14.781350    4816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:45:14.782517    4816 out.go:352] Setting JSON to false
	I0917 10:45:14.799574    4816 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4477,"bootTime":1726590637,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:45:14.799642    4816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:45:14.805331    4816 out.go:177] * [kubernetes-upgrade-875000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:45:14.813057    4816 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:45:14.813099    4816 notify.go:220] Checking for updates...
	I0917 10:45:14.820021    4816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:45:14.822977    4816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:45:14.826003    4816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:45:14.829006    4816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:45:14.832004    4816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:45:14.835313    4816 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:45:14.835377    4816 config.go:182] Loaded profile config "running-upgrade-161000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:45:14.835415    4816 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:45:14.838951    4816 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:45:14.845990    4816 start.go:297] selected driver: qemu2
	I0917 10:45:14.845996    4816 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:45:14.846001    4816 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:45:14.848190    4816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:45:14.849685    4816 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:45:14.852992    4816 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 10:45:14.853004    4816 cni.go:84] Creating CNI manager for ""
	I0917 10:45:14.853024    4816 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 10:45:14.853054    4816 start.go:340] cluster config:
	{Name:kubernetes-upgrade-875000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:45:14.856550    4816 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:45:14.863996    4816 out.go:177] * Starting "kubernetes-upgrade-875000" primary control-plane node in "kubernetes-upgrade-875000" cluster
	I0917 10:45:14.867996    4816 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 10:45:14.868008    4816 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 10:45:14.868015    4816 cache.go:56] Caching tarball of preloaded images
	I0917 10:45:14.868071    4816 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:45:14.868077    4816 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 10:45:14.868121    4816 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kubernetes-upgrade-875000/config.json ...
	I0917 10:45:14.868131    4816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kubernetes-upgrade-875000/config.json: {Name:mkf13f794caa953e391f0ad2faa51a95b0ed469b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:45:14.868458    4816 start.go:360] acquireMachinesLock for kubernetes-upgrade-875000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:45:14.868489    4816 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "kubernetes-upgrade-875000"
	I0917 10:45:14.868498    4816 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:45:14.868520    4816 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:45:14.876958    4816 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:45:14.892489    4816 start.go:159] libmachine.API.Create for "kubernetes-upgrade-875000" (driver="qemu2")
	I0917 10:45:14.892518    4816 client.go:168] LocalClient.Create starting
	I0917 10:45:14.892575    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:45:14.892609    4816 main.go:141] libmachine: Decoding PEM data...
	I0917 10:45:14.892621    4816 main.go:141] libmachine: Parsing certificate...
	I0917 10:45:14.892659    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:45:14.892682    4816 main.go:141] libmachine: Decoding PEM data...
	I0917 10:45:14.892691    4816 main.go:141] libmachine: Parsing certificate...
	I0917 10:45:14.893055    4816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:45:15.061778    4816 main.go:141] libmachine: Creating SSH key...
	I0917 10:45:15.102928    4816 main.go:141] libmachine: Creating Disk image...
	I0917 10:45:15.102934    4816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:45:15.103120    4816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:15.112414    4816 main.go:141] libmachine: STDOUT: 
	I0917 10:45:15.112431    4816 main.go:141] libmachine: STDERR: 
	I0917 10:45:15.112503    4816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2 +20000M
	I0917 10:45:15.120289    4816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:45:15.120306    4816 main.go:141] libmachine: STDERR: 
	I0917 10:45:15.120326    4816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:15.120332    4816 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:45:15.120346    4816 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:45:15.120379    4816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:05:a6:e9:a0:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:15.121907    4816 main.go:141] libmachine: STDOUT: 
	I0917 10:45:15.121931    4816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:45:15.121951    4816 client.go:171] duration metric: took 229.433291ms to LocalClient.Create
	I0917 10:45:17.124089    4816 start.go:128] duration metric: took 2.25561225s to createHost
	I0917 10:45:17.124163    4816 start.go:83] releasing machines lock for "kubernetes-upgrade-875000", held for 2.255737916s
	W0917 10:45:17.124196    4816 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:45:17.133618    4816 out.go:177] * Deleting "kubernetes-upgrade-875000" in qemu2 ...
	W0917 10:45:17.165385    4816 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:45:17.165413    4816 start.go:729] Will try again in 5 seconds ...
	I0917 10:45:22.167519    4816 start.go:360] acquireMachinesLock for kubernetes-upgrade-875000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:45:22.167949    4816 start.go:364] duration metric: took 345.541µs to acquireMachinesLock for "kubernetes-upgrade-875000"
	I0917 10:45:22.168005    4816 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:45:22.168269    4816 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:45:22.175754    4816 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:45:22.219062    4816 start.go:159] libmachine.API.Create for "kubernetes-upgrade-875000" (driver="qemu2")
	I0917 10:45:22.219109    4816 client.go:168] LocalClient.Create starting
	I0917 10:45:22.219218    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:45:22.219284    4816 main.go:141] libmachine: Decoding PEM data...
	I0917 10:45:22.219299    4816 main.go:141] libmachine: Parsing certificate...
	I0917 10:45:22.219349    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:45:22.219392    4816 main.go:141] libmachine: Decoding PEM data...
	I0917 10:45:22.219401    4816 main.go:141] libmachine: Parsing certificate...
	I0917 10:45:22.219904    4816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:45:22.393645    4816 main.go:141] libmachine: Creating SSH key...
	I0917 10:45:22.492933    4816 main.go:141] libmachine: Creating Disk image...
	I0917 10:45:22.492945    4816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:45:22.493141    4816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:22.502448    4816 main.go:141] libmachine: STDOUT: 
	I0917 10:45:22.502464    4816 main.go:141] libmachine: STDERR: 
	I0917 10:45:22.502529    4816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2 +20000M
	I0917 10:45:22.510323    4816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:45:22.510337    4816 main.go:141] libmachine: STDERR: 
	I0917 10:45:22.510349    4816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:22.510364    4816 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:45:22.510375    4816 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:45:22.510421    4816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:46:f5:48:a8:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:22.512089    4816 main.go:141] libmachine: STDOUT: 
	I0917 10:45:22.512108    4816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:45:22.512122    4816 client.go:171] duration metric: took 293.016625ms to LocalClient.Create
	I0917 10:45:24.514290    4816 start.go:128] duration metric: took 2.346057917s to createHost
	I0917 10:45:24.514391    4816 start.go:83] releasing machines lock for "kubernetes-upgrade-875000", held for 2.346496167s
	W0917 10:45:24.514884    4816 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:45:24.524572    4816 out.go:201] 
	W0917 10:45:24.533689    4816 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:45:24.533716    4816 out.go:270] * 
	* 
	W0917 10:45:24.536240    4816 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:45:24.546604    4816 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-875000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-875000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-875000: (2.175668209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-875000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-875000 status --format={{.Host}}: exit status 7 (55.091125ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-875000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-875000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.173485875s)

-- stdout --
	* [kubernetes-upgrade-875000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-875000" primary control-plane node in "kubernetes-upgrade-875000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-875000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-875000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:45:26.824941    4846 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:45:26.825067    4846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:45:26.825070    4846 out.go:358] Setting ErrFile to fd 2...
	I0917 10:45:26.825073    4846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:45:26.825193    4846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:45:26.826236    4846 out.go:352] Setting JSON to false
	I0917 10:45:26.842609    4846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4489,"bootTime":1726590637,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:45:26.842674    4846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:45:26.848348    4846 out.go:177] * [kubernetes-upgrade-875000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:45:26.856335    4846 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:45:26.856409    4846 notify.go:220] Checking for updates...
	I0917 10:45:26.862301    4846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:45:26.865259    4846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:45:26.868308    4846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:45:26.871312    4846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:45:26.874267    4846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:45:26.877529    4846 config.go:182] Loaded profile config "kubernetes-upgrade-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0917 10:45:26.877787    4846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:45:26.882261    4846 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:45:26.889279    4846 start.go:297] selected driver: qemu2
	I0917 10:45:26.889285    4846 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:45:26.889339    4846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:45:26.891615    4846 cni.go:84] Creating CNI manager for ""
	I0917 10:45:26.891669    4846 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:45:26.891695    4846 start.go:340] cluster config:
	{Name:kubernetes-upgrade-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-875000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:45:26.895347    4846 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:45:26.903316    4846 out.go:177] * Starting "kubernetes-upgrade-875000" primary control-plane node in "kubernetes-upgrade-875000" cluster
	I0917 10:45:26.907247    4846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:45:26.907261    4846 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:45:26.907267    4846 cache.go:56] Caching tarball of preloaded images
	I0917 10:45:26.907329    4846 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:45:26.907334    4846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:45:26.907382    4846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kubernetes-upgrade-875000/config.json ...
	I0917 10:45:26.907933    4846 start.go:360] acquireMachinesLock for kubernetes-upgrade-875000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:45:26.907959    4846 start.go:364] duration metric: took 19.958µs to acquireMachinesLock for "kubernetes-upgrade-875000"
	I0917 10:45:26.907967    4846 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:45:26.907973    4846 fix.go:54] fixHost starting: 
	I0917 10:45:26.908077    4846 fix.go:112] recreateIfNeeded on kubernetes-upgrade-875000: state=Stopped err=<nil>
	W0917 10:45:26.908084    4846 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:45:26.915259    4846 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-875000" ...
	I0917 10:45:26.919277    4846 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:45:26.919317    4846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:46:f5:48:a8:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:26.921303    4846 main.go:141] libmachine: STDOUT: 
	I0917 10:45:26.921319    4846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:45:26.921347    4846 fix.go:56] duration metric: took 13.37575ms for fixHost
	I0917 10:45:26.921352    4846 start.go:83] releasing machines lock for "kubernetes-upgrade-875000", held for 13.389375ms
	W0917 10:45:26.921357    4846 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:45:26.921391    4846 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:45:26.921395    4846 start.go:729] Will try again in 5 seconds ...
	I0917 10:45:31.923407    4846 start.go:360] acquireMachinesLock for kubernetes-upgrade-875000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:45:31.923550    4846 start.go:364] duration metric: took 115.75µs to acquireMachinesLock for "kubernetes-upgrade-875000"
	I0917 10:45:31.923568    4846 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:45:31.923572    4846 fix.go:54] fixHost starting: 
	I0917 10:45:31.923718    4846 fix.go:112] recreateIfNeeded on kubernetes-upgrade-875000: state=Stopped err=<nil>
	W0917 10:45:31.923724    4846 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:45:31.931910    4846 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-875000" ...
	I0917 10:45:31.934890    4846 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:45:31.934950    4846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:46:f5:48:a8:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubernetes-upgrade-875000/disk.qcow2
	I0917 10:45:31.936930    4846 main.go:141] libmachine: STDOUT: 
	I0917 10:45:31.936944    4846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:45:31.936962    4846 fix.go:56] duration metric: took 13.391208ms for fixHost
	I0917 10:45:31.936966    4846 start.go:83] releasing machines lock for "kubernetes-upgrade-875000", held for 13.411834ms
	W0917 10:45:31.937009    4846 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:45:31.944847    4846 out.go:201] 
	W0917 10:45:31.948872    4846 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:45:31.948882    4846 out.go:270] * 
	* 
	W0917 10:45:31.949331    4846 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:45:31.959802    4846 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-875000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-875000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-875000 version --output=json: exit status 1 (28.082959ms)

** stderr ** 
	error: context "kubernetes-upgrade-875000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-17 10:45:31.996384 -0700 PDT m=+3011.026995084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-875000 -n kubernetes-upgrade-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-875000 -n kubernetes-upgrade-875000: exit status 7 (30.880209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-875000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-875000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-875000
--- FAIL: TestKubernetesUpgrade (17.36s)
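Every start attempt in this test dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's socket at /var/run/socket_vmnet ("Connection refused"). A minimal host-side triage sketch, assuming socket_vmnet was installed per the lima-vm/socket_vmnet documentation (the gateway address below is an illustrative example, not a value from this run):

    # Is the daemon alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If it was installed as a launchd service, check whether it is loaded
    sudo launchctl list | grep socket_vmnet

    # Start it by hand to surface the underlying error (example gateway address)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet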

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.41s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19662
- KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4063657049/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.41s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.08s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19662
- KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4254407343/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.08s)
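Both TestHyperkitDriverSkipUpgrade subtests fail identically: hyperkit is an Intel-only hypervisor, so on a darwin/arm64 (Apple silicon) agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A sketch of a pre-flight guard a job like this could run before scheduling these subtests (the skip message is illustrative):

    # hyperkit only runs on Intel Macs; skip its tests on Apple silicon
    if [ "$(uname -s)/$(uname -m)" != "Darwin/x86_64" ]; then
        echo "SKIP: hyperkit driver unsupported on $(uname -s)/$(uname -m)"
        exit 0
    fi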

TestStoppedBinaryUpgrade/Upgrade (572.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.580693345 start -p stopped-upgrade-293000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.580693345 start -p stopped-upgrade-293000 --memory=2200 --vm-driver=qemu2 : (39.570718667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.580693345 -p stopped-upgrade-293000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.580693345 -p stopped-upgrade-293000 stop: (12.124253208s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-293000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0917 10:47:09.503516    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:48:42.246607    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:49:06.401318    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-293000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.055310792s)

-- stdout --
	* [stopped-upgrade-293000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-293000" primary control-plane node in "stopped-upgrade-293000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-293000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0917 10:46:26.071112    4887 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:46:26.071275    4887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:46:26.071282    4887 out.go:358] Setting ErrFile to fd 2...
	I0917 10:46:26.071285    4887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:46:26.071436    4887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:46:26.072723    4887 out.go:352] Setting JSON to false
	I0917 10:46:26.091184    4887 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4549,"bootTime":1726590637,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:46:26.091250    4887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:46:26.095204    4887 out.go:177] * [stopped-upgrade-293000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:46:26.103127    4887 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:46:26.103163    4887 notify.go:220] Checking for updates...
	I0917 10:46:26.110107    4887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:46:26.113132    4887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:46:26.116162    4887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:46:26.119103    4887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:46:26.122175    4887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:46:26.125304    4887 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:46:26.128082    4887 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 10:46:26.131158    4887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:46:26.134082    4887 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:46:26.141117    4887 start.go:297] selected driver: qemu2
	I0917 10:46:26.141127    4887 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:46:26.141198    4887 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:46:26.143982    4887 cni.go:84] Creating CNI manager for ""
	I0917 10:46:26.144014    4887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:46:26.144041    4887 start.go:340] cluster config:
	{Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:46:26.144092    4887 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:46:26.152087    4887 out.go:177] * Starting "stopped-upgrade-293000" primary control-plane node in "stopped-upgrade-293000" cluster
	I0917 10:46:26.156164    4887 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 10:46:26.156180    4887 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0917 10:46:26.156187    4887 cache.go:56] Caching tarball of preloaded images
	I0917 10:46:26.156259    4887 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:46:26.156265    4887 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0917 10:46:26.156320    4887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/config.json ...
	I0917 10:46:26.156790    4887 start.go:360] acquireMachinesLock for stopped-upgrade-293000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:46:26.156825    4887 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "stopped-upgrade-293000"
	I0917 10:46:26.156833    4887 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:46:26.156840    4887 fix.go:54] fixHost starting: 
	I0917 10:46:26.156951    4887 fix.go:112] recreateIfNeeded on stopped-upgrade-293000: state=Stopped err=<nil>
	W0917 10:46:26.156959    4887 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:46:26.165146    4887 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-293000" ...
	I0917 10:46:26.169069    4887 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:46:26.169145    4887 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50461-:22,hostfwd=tcp::50462-:2376,hostname=stopped-upgrade-293000 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/disk.qcow2
	I0917 10:46:26.215430    4887 main.go:141] libmachine: STDOUT: 
	I0917 10:46:26.215458    4887 main.go:141] libmachine: STDERR: 
	I0917 10:46:26.215465    4887 main.go:141] libmachine: Waiting for VM to start (ssh -p 50461 docker@127.0.0.1)...
	I0917 10:46:46.310628    4887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/config.json ...
	I0917 10:46:46.311095    4887 machine.go:93] provisionDockerMachine start ...
	I0917 10:46:46.311188    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.311467    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.311477    4887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:46:46.382352    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:46:46.382373    4887 buildroot.go:166] provisioning hostname "stopped-upgrade-293000"
	I0917 10:46:46.382449    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.382659    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.382671    4887 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-293000 && echo "stopped-upgrade-293000" | sudo tee /etc/hostname
	I0917 10:46:46.455302    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-293000
	
	I0917 10:46:46.455377    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.455516    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.455526    4887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-293000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-293000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-293000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:46:46.523867    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
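
Each "About to run SSH command" / "SSH cmd err, output" pair above is libmachine running one shell command over the port-forwarded SSH connection (localhost:50461 in this run). A minimal, self-contained sketch of such a runner using golang.org/x/crypto/ssh; the user and password below are illustrative, not values from this run:

package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

// runRemote opens one SSH session and runs a single shell command,
// returning its combined output, much like the provisioning steps above.
func runRemote(addr, user, password, cmd string) (string, error) {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.Password(password)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// e.g. the "hostname" step above, against the user-mode-NAT forward.
	out, err := runRemote("localhost:50461", "docker", "password", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
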
	I0917 10:46:46.523882    4887 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1312/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1312/.minikube}
	I0917 10:46:46.523890    4887 buildroot.go:174] setting up certificates
	I0917 10:46:46.523901    4887 provision.go:84] configureAuth start
	I0917 10:46:46.523909    4887 provision.go:143] copyHostCerts
	I0917 10:46:46.523979    4887 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem, removing ...
	I0917 10:46:46.523988    4887 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem
	I0917 10:46:46.524100    4887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/key.pem (1679 bytes)
	I0917 10:46:46.524312    4887 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem, removing ...
	I0917 10:46:46.524317    4887 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem
	I0917 10:46:46.524380    4887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.pem (1078 bytes)
	I0917 10:46:46.524496    4887 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem, removing ...
	I0917 10:46:46.524500    4887 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem
	I0917 10:46:46.524553    4887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1312/.minikube/cert.pem (1123 bytes)
	I0917 10:46:46.524660    4887 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-293000 san=[127.0.0.1 localhost minikube stopped-upgrade-293000]
	I0917 10:46:46.630770    4887 provision.go:177] copyRemoteCerts
	I0917 10:46:46.630813    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:46:46.630821    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:46:46.663556    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:46:46.670222    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 10:46:46.676807    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:46:46.684254    4887 provision.go:87] duration metric: took 160.34675ms to configureAuth
	I0917 10:46:46.684263    4887 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:46:46.684381    4887 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:46:46.684421    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.684518    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.684523    4887 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:46:46.742033    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:46:46.742044    4887 buildroot.go:70] root file system type: tmpfs
	I0917 10:46:46.742095    4887 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:46:46.742168    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.742296    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.742330    4887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:46:46.805196    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:46:46.805259    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:46.805378    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:46.805387    4887 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:46:47.180787    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
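
The command at 10:46:46.805 is an install-if-changed idiom: diff exits 0 when the unit on disk already matches the new one, so the block after || (move into place, daemon-reload, enable, restart) runs only when the file differs or, as here, does not exist yet. A sketch of how such a one-liner can be composed; the path is the one from the log, the helper itself is illustrative:

package main

import "fmt"

func main() {
	// The right-hand side of || runs only when diff reports a difference
	// (or the target unit does not exist yet).
	unit := "/lib/systemd/system/docker.service"
	fmt.Printf("sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }\n", unit)
}
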
	
	I0917 10:46:47.180802    4887 machine.go:96] duration metric: took 869.72525ms to provisionDockerMachine
	I0917 10:46:47.180810    4887 start.go:293] postStartSetup for "stopped-upgrade-293000" (driver="qemu2")
	I0917 10:46:47.180816    4887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:46:47.180884    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:46:47.180894    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:46:47.214225    4887 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:46:47.215396    4887 info.go:137] Remote host: Buildroot 2021.02.12
	I0917 10:46:47.215404    4887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/addons for local assets ...
	I0917 10:46:47.215483    4887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1312/.minikube/files for local assets ...
	I0917 10:46:47.215582    4887 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem -> 18402.pem in /etc/ssl/certs
	I0917 10:46:47.215674    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:46:47.218058    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem --> /etc/ssl/certs/18402.pem (1708 bytes)
	I0917 10:46:47.224723    4887 start.go:296] duration metric: took 43.909291ms for postStartSetup
	I0917 10:46:47.224739    4887 fix.go:56] duration metric: took 21.068553833s for fixHost
	I0917 10:46:47.224774    4887 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:47.224879    4887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c45190] 0x102c479d0 <nil>  [] 0s} localhost 50461 <nil> <nil>}
	I0917 10:46:47.224888    4887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:46:47.282820    4887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726595207.290094088
	
	I0917 10:46:47.282828    4887 fix.go:216] guest clock: 1726595207.290094088
	I0917 10:46:47.282832    4887 fix.go:229] Guest: 2024-09-17 10:46:47.290094088 -0700 PDT Remote: 2024-09-17 10:46:47.224741 -0700 PDT m=+21.182421001 (delta=65.353088ms)
	I0917 10:46:47.282844    4887 fix.go:200] guest clock delta is within tolerance: 65.353088ms
	I0917 10:46:47.282847    4887 start.go:83] releasing machines lock for "stopped-upgrade-293000", held for 21.126671375s
	I0917 10:46:47.282921    4887 ssh_runner.go:195] Run: cat /version.json
	I0917 10:46:47.282921    4887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:46:47.282929    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:46:47.282943    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	W0917 10:46:47.283512    4887 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50461: connect: connection refused
	I0917 10:46:47.283532    4887 retry.go:31] will retry after 162.297452ms: dial tcp [::1]:50461: connect: connection refused
	W0917 10:46:47.482841    4887 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0917 10:46:47.482920    4887 ssh_runner.go:195] Run: systemctl --version
	I0917 10:46:47.485381    4887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:46:47.487830    4887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:46:47.487864    4887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0917 10:46:47.491756    4887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0917 10:46:47.497203    4887 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:46:47.497212    4887 start.go:495] detecting cgroup driver to use...
	I0917 10:46:47.497290    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:46:47.505632    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 10:46:47.508920    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:46:47.512129    4887 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:46:47.512155    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:46:47.515260    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:46:47.517967    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:46:47.520836    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:46:47.524323    4887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:46:47.527568    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:46:47.530481    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:46:47.533260    4887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:46:47.536646    4887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:46:47.539698    4887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:46:47.542251    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:47.622376    4887 ssh_runner.go:195] Run: sudo systemctl restart containerd
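
The run of sed edits between 10:46:47.505 and 10:46:47.533 rewrites /etc/containerd/config.toml in place: the sandbox (pause) image, SystemdCgroup = false (the "cgroupfs" driver announced at 47.512), the runc v2 runtime, the CNI conf_dir, and enable_unprivileged_ports. The same indentation-preserving substitution for one representative key, expressed in Go; the sample config text is illustrative:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
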
	I0917 10:46:47.633487    4887 start.go:495] detecting cgroup driver to use...
	I0917 10:46:47.633557    4887 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:46:47.638776    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:46:47.643410    4887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:46:47.651090    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:46:47.655727    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:46:47.660168    4887 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:46:47.716185    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:46:47.721594    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:46:47.726752    4887 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:46:47.728079    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:46:47.730948    4887 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 10:46:47.735963    4887 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:46:47.811705    4887 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:46:47.873569    4887 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:46:47.873630    4887 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:46:47.878968    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:47.954904    4887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:46:49.085820    4887 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.130934333s)
	I0917 10:46:49.085897    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:46:49.090631    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:46:49.095055    4887 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:46:49.175831    4887 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:46:49.248923    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:49.331153    4887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:46:49.337304    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:46:49.342316    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:49.418184    4887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:46:49.455824    4887 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:46:49.455924    4887 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:46:49.458038    4887 start.go:563] Will wait 60s for crictl version
	I0917 10:46:49.458097    4887 ssh_runner.go:195] Run: which crictl
	I0917 10:46:49.459474    4887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:46:49.474058    4887 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0917 10:46:49.474142    4887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:46:49.489904    4887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:46:49.510949    4887 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0917 10:46:49.511100    4887 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0917 10:46:49.512393    4887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:46:49.516466    4887 kubeadm.go:883] updating cluster {Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0917 10:46:49.516518    4887 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 10:46:49.516571    4887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:46:49.527672    4887 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 10:46:49.527680    4887 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 10:46:49.527733    4887 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 10:46:49.531103    4887 ssh_runner.go:195] Run: which lz4
	I0917 10:46:49.532303    4887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 10:46:49.533538    4887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 10:46:49.533550    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0917 10:46:50.417118    4887 docker.go:649] duration metric: took 884.879459ms to copy over tarball
	I0917 10:46:50.417182    4887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 10:46:51.606313    4887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.189154459s)
	I0917 10:46:51.606326    4887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 10:46:51.621630    4887 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 10:46:51.624860    4887 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0917 10:46:51.629987    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:51.707451    4887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:46:53.144182    4887 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.43675925s)
	I0917 10:46:53.144305    4887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:46:53.159053    4887 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 10:46:53.159063    4887 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 10:46:53.159068    4887 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
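
The root cause is visible in the two listings above: the preloaded tarball ships images tagged k8s.gcr.io/..., while this minikube compares against registry.k8s.io/... names, so every image is judged missing, removed, and reloaded from the host-side cache; the kube-proxy cache file then turns out not to exist on the host, producing the "Unable to load cached images" error further down. A normalization along these lines would make the two name forms compare equal (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// normalizeRegistry maps the legacy registry prefix onto the current one, so
// k8s.gcr.io/kube-proxy:v1.24.1 equals registry.k8s.io/kube-proxy:v1.24.1.
// Hypothetical helper, for illustration.
func normalizeRegistry(image string) string {
	return strings.Replace(image, "k8s.gcr.io/", "registry.k8s.io/", 1)
}

func main() {
	fmt.Println(normalizeRegistry("k8s.gcr.io/kube-proxy:v1.24.1"))
}
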
	I0917 10:46:53.163962    4887 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:53.166448    4887 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.168169    4887 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.168687    4887 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:53.171039    4887 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.171465    4887 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.172147    4887 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.172146    4887 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.173816    4887 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.175133    4887 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 10:46:53.175140    4887 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.176421    4887 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.176437    4887 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.176391    4887 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.178379    4887 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 10:46:53.179089    4887 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.599534    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.610704    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.615260    4887 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0917 10:46:53.615297    4887 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.615364    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0917 10:46:53.621154    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.629065    4887 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0917 10:46:53.629075    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0917 10:46:53.629086    4887 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.629138    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0917 10:46:53.635037    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.640604    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.646369    4887 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0917 10:46:53.646386    4887 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.646434    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0917 10:46:53.646695    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0917 10:46:53.659835    4887 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0917 10:46:53.659854    4887 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.659915    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0917 10:46:53.669468    4887 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0917 10:46:53.669475    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0917 10:46:53.669492    4887 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.669554    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 10:46:53.676873    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0917 10:46:53.685049    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 10:46:53.685095    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0917 10:46:53.685186    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0917 10:46:53.690816    4887 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0917 10:46:53.690837    4887 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0917 10:46:53.690907    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0917 10:46:53.691460    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0917 10:46:53.691471    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0917 10:46:53.703106    4887 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 10:46:53.703252    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.716130    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 10:46:53.716263    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0917 10:46:53.726090    4887 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0917 10:46:53.726116    4887 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.726193    4887 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0917 10:46:53.726764    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0917 10:46:53.726782    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0917 10:46:53.745878    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 10:46:53.746006    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0917 10:46:53.761284    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0917 10:46:53.761311    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0917 10:46:53.768695    4887 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0917 10:46:53.768709    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0917 10:46:53.846546    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
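
Each "Loading image" step streams a cached tarball into the daemon with sudo cat <file> | docker load. Run locally, the same pattern is just a file piped into docker load's stdin; the image path below is the one from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Equivalent of: cat /var/lib/minikube/images/pause_3.7 | docker load
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("docker load: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
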
	I0917 10:46:53.859844    4887 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0917 10:46:53.859860    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0917 10:46:53.976760    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0917 10:46:54.021641    4887 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 10:46:54.021765    4887 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:54.024499    4887 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0917 10:46:54.024508    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0917 10:46:54.036108    4887 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0917 10:46:54.036136    4887 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:54.036213    4887 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:46:54.173280    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0917 10:46:54.173298    4887 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 10:46:54.173426    4887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 10:46:54.174858    4887 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0917 10:46:54.174868    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0917 10:46:54.201251    4887 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 10:46:54.201273    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0917 10:46:54.438904    4887 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 10:46:54.438948    4887 cache_images.go:92] duration metric: took 1.2799125s to LoadCachedImages
	W0917 10:46:54.438994    4887 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0917 10:46:54.439002    4887 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0917 10:46:54.439050    4887 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-293000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:46:54.439126    4887 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:46:54.452601    4887 cni.go:84] Creating CNI manager for ""
	I0917 10:46:54.452614    4887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:46:54.452627    4887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:46:54.452635    4887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-293000 NodeName:stopped-upgrade-293000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:46:54.452696    4887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-293000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
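
The kubeadm config above is rendered from the kubeadm options struct logged at 10:46:54.452, by filling a text template with values such as AdvertiseAddress, the API server port, and the node name. A much-reduced sketch of that kind of rendering; the template text and struct are illustrative, not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`))
	// Values as they appear in the rendered config above.
	cfg := initCfg{AdvertiseAddress: "10.0.2.15", BindPort: 8443, NodeName: "stopped-upgrade-293000"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}
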
	
	I0917 10:46:54.452758    4887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0917 10:46:54.455974    4887 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:46:54.456010    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 10:46:54.459026    4887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0917 10:46:54.464843    4887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:46:54.469772    4887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0917 10:46:54.474866    4887 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0917 10:46:54.476112    4887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:46:54.479944    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:54.561695    4887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:46:54.571584    4887 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000 for IP: 10.0.2.15
	I0917 10:46:54.571594    4887 certs.go:194] generating shared ca certs ...
	I0917 10:46:54.571603    4887 certs.go:226] acquiring lock for ca certs: {Name:mk1d9837d65f8f1762ad8daf2cfbb53face1f201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.571764    4887 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key
	I0917 10:46:54.571803    4887 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key
	I0917 10:46:54.571809    4887 certs.go:256] generating profile certs ...
	I0917 10:46:54.571881    4887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.key
	I0917 10:46:54.571900    4887 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236
	I0917 10:46:54.571912    4887 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0917 10:46:54.637794    4887 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236 ...
	I0917 10:46:54.637809    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236: {Name:mk34090c95e504420b3662e3619686681165024e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.638120    4887 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236 ...
	I0917 10:46:54.638125    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236: {Name:mk506bcbcf66d39a99d777a5b316d23fed4c628b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.638265    4887 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt.adb24236 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt
	I0917 10:46:54.638397    4887 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key.adb24236 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key
	I0917 10:46:54.638533    4887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/proxy-client.key
	I0917 10:46:54.638668    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840.pem (1338 bytes)
	W0917 10:46:54.638689    4887 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840_empty.pem, impossibly tiny 0 bytes
	I0917 10:46:54.638696    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:46:54.638715    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:46:54.638733    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:46:54.638753    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/key.pem (1679 bytes)
	I0917 10:46:54.638791    4887 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem (1708 bytes)
	I0917 10:46:54.639126    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:46:54.646181    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:46:54.652969    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:46:54.660115    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:46:54.666835    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:46:54.673633    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:46:54.680450    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:46:54.687834    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:46:54.695260    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/ssl/certs/18402.pem --> /usr/share/ca-certificates/18402.pem (1708 bytes)
	I0917 10:46:54.702445    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:46:54.709316    4887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/1840.pem --> /usr/share/ca-certificates/1840.pem (1338 bytes)
	I0917 10:46:54.716290    4887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:46:54.721329    4887 ssh_runner.go:195] Run: openssl version
	I0917 10:46:54.723293    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18402.pem && ln -fs /usr/share/ca-certificates/18402.pem /etc/ssl/certs/18402.pem"
	I0917 10:46:54.726367    4887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18402.pem
	I0917 10:46:54.727730    4887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:11 /usr/share/ca-certificates/18402.pem
	I0917 10:46:54.727758    4887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18402.pem
	I0917 10:46:54.729616    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18402.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:46:54.732683    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:46:54.736042    4887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:46:54.737592    4887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:46:54.737624    4887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:46:54.739323    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:46:54.742094    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1840.pem && ln -fs /usr/share/ca-certificates/1840.pem /etc/ssl/certs/1840.pem"
	I0917 10:46:54.744969    4887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1840.pem
	I0917 10:46:54.746297    4887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:11 /usr/share/ca-certificates/1840.pem
	I0917 10:46:54.746320    4887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1840.pem
	I0917 10:46:54.747955    4887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1840.pem /etc/ssl/certs/51391683.0"
	I0917 10:46:54.751328    4887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:46:54.752755    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:46:54.754993    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:46:54.756906    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:46:54.759024    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:46:54.760741    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:46:54.762520    4887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
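
The six openssl runs above each use "-checkend 86400", which exits non-zero if the certificate expires within the next 24 hours (86400 seconds). Below is a minimal Go sketch of the same check using crypto/x509 — an illustration, not minikube's actual code; the path is one of the certs probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates probed above; any PEM-encoded cert works here.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// Equivalent of "openssl x509 -checkend 86400": fail if the cert
	// has less than 24h of validity remaining.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
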
	I0917 10:46:54.764418    4887 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50495 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 10:46:54.764491    4887 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:46:54.774726    4887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:46:54.777915    4887 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:46:54.777927    4887 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:46:54.777959    4887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:46:54.780875    4887 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:46:54.781155    4887 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-293000" does not appear in /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:46:54.781250    4887 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1312/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-293000" cluster setting kubeconfig missing "stopped-upgrade-293000" context setting]
	I0917 10:46:54.781474    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:46:54.781925    4887 kapi.go:59] client config for stopped-upgrade-293000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10421d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:46:54.782261    4887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:46:54.784862    4887 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-293000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
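
The drift check above is an ordinary unified diff: minikube compares the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new and reconfigures when diff exits 1 (files differ). A sketch of that exit-code convention in Go, run locally for illustration rather than over SSH as the log does:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// kubeadmDrifted reports whether the staged config differs from the one
// on disk, mirroring the "sudo diff -u" check in the log above.
func kubeadmDrifted(current, staged string) (bool, error) {
	out, err := exec.Command("diff", "-u", current, staged).CombinedOutput()
	if err == nil {
		return false, nil // exit 0: files are identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected:\n%s", out) // exit 1: files differ
		return true, nil
	}
	return false, err // exit 2 or worse: diff itself failed
}

func main() {
	drifted, err := kubeadmDrifted("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("drifted:", drifted)
}
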
	I0917 10:46:54.784869    4887 kubeadm.go:1160] stopping kube-system containers ...
	I0917 10:46:54.784922    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:46:54.795348    4887 docker.go:483] Stopping containers: [06f0615ccfda 7d102603a586 98b0c48c9735 4dabcabdd1a5 185cd67f41ca 8865fe51a3f3 e9458d99309c b0315bdc1db3]
	I0917 10:46:54.795431    4887 ssh_runner.go:195] Run: docker stop 06f0615ccfda 7d102603a586 98b0c48c9735 4dabcabdd1a5 185cd67f41ca 8865fe51a3f3 e9458d99309c b0315bdc1db3
	I0917 10:46:54.806495    4887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 10:46:54.812127    4887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:46:54.815362    4887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 10:46:54.815368    4887 kubeadm.go:157] found existing configuration files:
	
	I0917 10:46:54.815396    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf
	I0917 10:46:54.817957    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 10:46:54.817984    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:46:54.820780    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf
	I0917 10:46:54.823764    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 10:46:54.823788    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:46:54.826520    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf
	I0917 10:46:54.828948    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 10:46:54.828973    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:46:54.832142    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf
	I0917 10:46:54.835229    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 10:46:54.835261    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 10:46:54.837980    4887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:46:54.840806    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:54.863550    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.346578    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.472938    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.496660    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 10:46:55.520961    4887 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:46:55.521045    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:46:56.023343    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:46:56.523087    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:46:56.528093    4887 api_server.go:72] duration metric: took 1.007164167s to wait for apiserver process to appear ...
	I0917 10:46:56.528105    4887 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:46:56.528115    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:01.530081    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:01.530108    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:06.530465    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:06.530489    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:11.530763    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:11.530818    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:16.531463    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:16.531541    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:21.532310    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:21.532385    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:26.533796    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:26.533898    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:31.535542    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:31.535563    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:36.537075    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:36.537097    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:41.539146    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:41.539184    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:46.540553    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:46.540588    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:51.542740    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:47:51.542795    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:47:56.543370    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
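
Each healthz probe above is a plain HTTPS GET with a 5-second client timeout; the 5s spacing of the probes falls out of that timeout rather than an explicit sleep, while the earlier pgrep retries were paced at 500ms. A minimal Go sketch of the polling pattern, with the caveat that the real check authenticates with the cluster's client certificates while this sketch skips TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the overall deadline passes. Each probe blocks for up to 5s,
// which is what produces the 5-second cadence seen in the log.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut: minikube trusts the cluster CA and
			// presents client certs instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
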
	I0917 10:47:56.543584    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:47:56.557859    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:47:56.557958    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:47:56.572358    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:47:56.572459    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:47:56.582931    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:47:56.583017    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:47:56.592672    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:47:56.592760    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:47:56.603072    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:47:56.603154    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:47:56.617800    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:47:56.617878    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:47:56.628439    4887 logs.go:276] 0 containers: []
	W0917 10:47:56.628453    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:47:56.628519    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:47:56.640383    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:47:56.640400    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:47:56.640406    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:47:56.653058    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:47:56.653070    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:47:56.664010    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:47:56.664023    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:47:56.770516    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:47:56.770528    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:47:56.797534    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:47:56.797546    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:47:56.809743    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:47:56.809754    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:47:56.835380    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:47:56.835388    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:47:56.847015    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:47:56.847032    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:47:56.851225    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:47:56.851231    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:47:56.866972    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:47:56.866986    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:47:56.883857    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:47:56.883870    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:47:56.903139    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:47:56.903152    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:47:56.914951    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:47:56.914963    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:47:56.954379    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:47:56.954388    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:47:56.969830    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:47:56.969843    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:47:56.981439    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:47:56.981453    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:47:56.993752    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:47:56.993763    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:47:59.513431    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:04.515535    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:04.515712    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:04.531729    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:04.531809    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:04.547948    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:04.548022    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:04.558187    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:04.558270    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:04.568991    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:04.569076    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:04.579269    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:04.579355    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:04.590110    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:04.590196    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:04.600547    4887 logs.go:276] 0 containers: []
	W0917 10:48:04.600560    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:04.600636    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:04.611692    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:04.611710    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:04.611716    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:04.649296    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:04.649311    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:04.663593    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:04.663602    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:04.679155    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:04.679168    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:04.691223    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:04.691233    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:04.708273    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:04.708284    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:04.723042    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:04.723053    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:04.737352    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:04.737362    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:04.741934    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:04.741940    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:04.754368    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:04.754378    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:04.766278    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:04.766293    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:04.789880    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:04.789887    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:04.827350    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:04.827356    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:04.841948    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:04.841959    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:04.867114    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:04.867125    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:04.879026    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:04.879036    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:04.896888    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:04.896897    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:07.410657    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:12.413116    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:12.413432    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:12.437443    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:12.437563    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:12.453238    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:12.453330    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:12.467255    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:12.467337    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:12.477612    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:12.477694    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:12.488157    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:12.488237    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:12.499176    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:12.499259    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:12.509746    4887 logs.go:276] 0 containers: []
	W0917 10:48:12.509759    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:12.509833    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:12.520522    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:12.520541    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:12.520546    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:12.536988    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:12.537003    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:12.551132    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:12.551141    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:12.562743    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:12.562754    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:12.597777    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:12.597788    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:12.622738    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:12.622754    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:12.635454    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:12.635466    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:12.672921    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:12.672931    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:12.677552    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:12.677558    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:12.694632    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:12.694646    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:12.707137    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:12.707147    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:12.732033    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:12.732045    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:12.745591    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:12.745604    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:12.760241    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:12.760254    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:12.771257    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:12.771273    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:12.788115    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:12.788125    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:12.801281    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:12.801293    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:15.315905    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:20.318043    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:20.318165    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:20.330576    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:20.330660    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:20.341008    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:20.341095    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:20.351427    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:20.351510    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:20.361976    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:20.362067    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:20.374323    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:20.374403    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:20.384712    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:20.384805    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:20.394564    4887 logs.go:276] 0 containers: []
	W0917 10:48:20.394577    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:20.394646    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:20.405004    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:20.405023    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:20.405029    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:20.417339    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:20.417349    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:20.431676    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:20.431691    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:20.449096    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:20.449105    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:20.465335    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:20.465350    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:20.477352    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:20.477362    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:20.488730    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:20.488741    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:20.512746    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:20.512756    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:20.524298    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:20.524310    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:20.536483    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:20.536494    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:20.573299    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:20.573306    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:20.613455    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:20.613468    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:20.627484    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:20.627494    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:20.653250    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:20.653263    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:20.667523    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:20.667539    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:20.671808    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:20.671814    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:20.683050    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:20.683060    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:23.197226    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:28.199348    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:28.199518    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:28.211961    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:28.212054    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:28.223148    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:28.223236    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:28.233927    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:28.234008    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:28.244285    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:28.244374    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:28.262875    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:28.262956    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:28.273247    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:28.273321    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:28.284109    4887 logs.go:276] 0 containers: []
	W0917 10:48:28.284121    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:28.284193    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:28.294592    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:28.294607    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:28.294612    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:28.308940    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:28.308955    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:28.344002    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:28.344013    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:28.359086    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:28.359096    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:28.386910    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:28.386925    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:28.400065    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:28.400082    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:28.438887    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:28.438897    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:28.443184    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:28.443192    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:28.455492    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:28.455503    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:28.474319    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:28.474329    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:28.499393    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:28.499403    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:28.510915    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:28.510927    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:28.522364    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:28.522376    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:28.533691    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:28.533702    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:28.545482    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:28.545496    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:28.556750    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:28.556760    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:28.570936    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:28.570947    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:31.086940    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:36.089223    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:36.089453    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:36.106738    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:36.106850    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:36.119877    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:36.119968    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:36.130886    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:36.130962    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:36.143421    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:36.143496    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:36.153786    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:36.153858    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:36.165149    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:36.165223    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:36.175517    4887 logs.go:276] 0 containers: []
	W0917 10:48:36.175528    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:36.175599    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:36.185868    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:36.185888    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:36.185893    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:36.199632    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:36.199647    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:36.212085    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:36.212096    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:36.250293    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:36.250302    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:36.276559    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:36.276571    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:36.291677    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:36.291689    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:36.305586    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:36.305602    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:36.330522    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:36.330530    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:36.349242    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:36.349258    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:36.363644    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:36.363660    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:36.375483    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:36.375492    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:36.411589    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:36.411599    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:36.432344    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:36.432355    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:36.443910    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:36.443926    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:36.454925    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:36.454938    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:36.459470    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:36.459476    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:36.473564    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:36.473576    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
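
	The cycle above repeats for the rest of this test: minikube polls the apiserver's /healthz endpoint at https://10.0.2.15:8443, the request times out after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), and the retry path re-enumerates the control-plane containers and tails their logs before checking again. As a minimal sketch only, not minikube's actual api_server.go, the polling half of that loop could look like the Go program below; the URL, the 5 s timeout, and the ~2.5 s gap between cycles are read off the log lines, while the TLS setting is an assumption about the test cluster's self-signed certificate.

	    // Minimal sketch (not minikube's implementation) of the
	    // "Checking apiserver healthz ..." retry loop seen in this log.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5 s gap before each "stopped:" line
	            Transport: &http.Transport{
	                // Assumption: the test cluster serves a self-signed cert,
	                // so verification is skipped here; never do this in production.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        url := "https://10.0.2.15:8443/healthz"
	        for attempt := 1; ; attempt++ {
	            resp, err := client.Get(url)
	            if err != nil {
	                // This is the branch the log keeps hitting.
	                fmt.Printf("attempt %d: stopped: %v\n", attempt, err)
	            } else {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Printf("attempt %d: apiserver healthy\n", attempt)
	                    return
	                }
	                fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
	            }
	            time.Sleep(2500 * time.Millisecond) // log shows ~2.5 s between cycles
	        }
	    }
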
	I0917 10:48:38.992059    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:43.993736    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:43.994045    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:44.016987    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:44.017129    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:44.034142    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:44.034238    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:44.049106    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:44.049193    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:44.063760    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:44.063842    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:44.074135    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:44.074223    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:44.084312    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:44.084385    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:44.094533    4887 logs.go:276] 0 containers: []
	W0917 10:48:44.094545    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:44.094618    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:44.105431    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:44.105447    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:44.105452    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:44.130702    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:44.130714    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:44.144783    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:44.144796    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:44.157128    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:44.157141    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:44.180397    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:44.180408    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:44.220203    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:44.220212    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:44.224714    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:44.224723    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:44.239354    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:44.239364    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:44.251069    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:44.251079    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:44.262123    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:44.262132    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:44.298025    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:44.298037    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:44.312498    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:44.312509    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:44.330981    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:44.330993    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:44.349312    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:44.349331    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:44.367874    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:44.367889    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:44.379263    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:44.379274    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:44.390988    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:44.391000    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
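
	Each failed health check is followed by the same gathering pass: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to list a component's containers (note kindnet consistently returns 0), then docker logs --tail 400 <id> for each hit, plus journalctl and dmesg for the host side. Below is a hedged sketch of that enumeration in Go, assuming only a local docker CLI on PATH; the containerIDs helper is hypothetical and is not minikube's logs.go.

	    // Hypothetical sketch of the docker-based log gathering in this log;
	    // component names are taken from the filters above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all containers (running or not) whose name
	    // matches k8s_<component>, mirroring the docker ps filter in the log.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner",
	        }
	        for _, c := range components {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println("docker ps failed:", err)
	                continue
	            }
	            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	            for _, id := range ids {
	                // Same tail depth the log uses for every container.
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
	            }
	        }
	    }
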
	I0917 10:48:46.906087    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:51.908326    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:51.908547    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:51.930612    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:51.930735    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:51.946114    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:51.946205    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:51.959043    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:51.959129    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:51.969924    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:51.970013    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:51.980796    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:51.980871    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:51.991469    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:51.991552    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:52.001777    4887 logs.go:276] 0 containers: []
	W0917 10:48:52.001791    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:52.001856    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:52.016664    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:52.016685    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:52.016691    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:52.033957    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:48:52.033972    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:48:52.044970    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:48:52.044985    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:48:52.061530    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:48:52.061542    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:48:52.073498    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:48:52.073508    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:48:52.077547    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:48:52.077559    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:48:52.091937    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:52.091947    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:52.117787    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:52.117802    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:48:52.157100    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:48:52.157148    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:48:52.172067    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:48:52.172081    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:48:52.183558    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:52.183570    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:52.195511    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:48:52.195521    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:48:52.231405    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:48:52.231418    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:48:52.243611    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:52.243624    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:52.255576    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:48:52.255588    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:48:52.273219    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:52.273233    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:52.284938    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:48:52.284953    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:48:54.811666    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:48:59.813826    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:48:59.813990    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:48:59.833650    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:48:59.833745    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:48:59.845899    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:48:59.846012    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:48:59.856394    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:48:59.856478    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:48:59.866792    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:48:59.866875    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:48:59.877169    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:48:59.877248    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:48:59.887653    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:48:59.887732    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:48:59.897929    4887 logs.go:276] 0 containers: []
	W0917 10:48:59.897943    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:48:59.898003    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:48:59.908512    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:48:59.908541    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:48:59.908547    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:48:59.936200    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:48:59.936215    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:48:59.950970    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:48:59.950979    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:48:59.962664    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:48:59.962674    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:48:59.975698    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:48:59.975709    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:48:59.987981    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:48:59.987992    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:00.027808    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:00.027826    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:00.039181    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:00.039194    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:00.064214    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:00.064222    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:00.068662    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:00.068671    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:00.082290    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:00.082299    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:00.096801    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:00.096812    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:00.114020    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:00.114029    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:00.125919    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:00.125929    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:00.137187    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:00.137197    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:00.150944    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:00.150959    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:00.162926    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:00.162936    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:02.702359    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:07.704462    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:07.704649    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:07.717600    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:07.717698    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:07.728313    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:07.728394    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:07.738623    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:07.738707    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:07.748915    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:07.749001    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:07.759606    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:07.759687    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:07.770192    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:07.770276    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:07.786546    4887 logs.go:276] 0 containers: []
	W0917 10:49:07.786558    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:07.786627    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:07.796695    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:07.796712    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:07.796717    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:07.819992    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:07.820000    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:07.837209    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:07.837221    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:07.862412    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:07.862422    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:07.877216    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:07.877226    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:07.891985    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:07.891994    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:07.929832    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:07.929840    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:07.943301    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:07.943312    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:07.955352    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:07.955368    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:07.967507    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:07.967517    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:08.005475    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:08.005485    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:08.020494    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:08.020505    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:08.032115    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:08.032128    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:08.043610    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:08.043620    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:08.055485    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:08.055501    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:08.067507    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:08.067532    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:08.079165    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:08.079178    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:10.586077    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:15.588196    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:15.588377    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:15.599685    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:15.599777    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:15.610381    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:15.610472    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:15.626114    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:15.626201    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:15.636833    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:15.636921    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:15.647283    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:15.647365    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:15.659310    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:15.659394    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:15.669539    4887 logs.go:276] 0 containers: []
	W0917 10:49:15.669553    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:15.669625    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:15.680384    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:15.680401    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:15.680406    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:15.705648    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:15.705664    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:15.717322    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:15.717334    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:15.729505    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:15.729527    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:15.754145    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:15.754153    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:15.766168    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:15.766178    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:15.804351    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:15.804379    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:15.808654    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:15.808660    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:15.831710    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:15.831721    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:15.845543    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:15.845554    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:15.856451    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:15.856463    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:15.870628    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:15.870642    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:15.883730    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:15.883741    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:15.894842    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:15.894852    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:15.929579    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:15.929595    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:15.944296    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:15.944307    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:15.958535    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:15.958545    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:18.471987    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:23.473238    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:23.473406    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:23.489646    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:23.489732    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:23.500158    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:23.500240    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:23.510761    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:23.510843    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:23.521698    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:23.521776    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:23.536425    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:23.536504    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:23.546862    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:23.546945    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:23.557657    4887 logs.go:276] 0 containers: []
	W0917 10:49:23.557667    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:23.557728    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:23.568284    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:23.568301    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:23.568306    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:23.582990    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:23.583001    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:23.595797    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:23.595809    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:23.620215    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:23.620226    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:23.634803    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:23.634817    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:23.646400    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:23.646413    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:23.670992    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:23.670999    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:23.710970    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:23.710979    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:23.728657    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:23.728670    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:23.740316    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:23.740326    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:23.751630    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:23.751643    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:23.765302    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:23.765315    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:23.777120    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:23.777135    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:23.781272    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:23.781279    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:23.802401    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:23.802413    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:23.816307    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:23.816318    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:23.834462    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:23.834473    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:26.372262    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:31.374492    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:31.374890    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:31.402988    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:31.403139    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:31.420871    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:31.420979    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:31.434335    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:31.434422    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:31.445579    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:31.445663    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:31.455839    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:31.455918    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:31.466557    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:31.466630    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:31.478576    4887 logs.go:276] 0 containers: []
	W0917 10:49:31.478590    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:31.478669    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:31.490125    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:31.490150    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:31.490156    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:31.505416    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:31.505426    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:31.517178    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:31.517187    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:31.531990    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:31.531998    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:31.543660    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:31.543670    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:31.558633    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:31.558644    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:31.571004    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:31.571019    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:31.611028    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:31.611038    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:31.615756    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:31.615766    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:31.659683    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:31.659693    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:31.673999    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:31.674011    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:31.685950    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:31.685965    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:31.698471    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:31.698482    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:31.715506    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:31.715518    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:31.738350    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:31.738358    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:31.766296    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:31.766325    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:31.780201    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:31.780212    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:34.300035    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:39.302179    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:39.302430    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:39.329678    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:39.329787    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:39.342948    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:39.343030    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:39.357186    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:39.357270    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:39.367697    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:39.367781    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:39.378298    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:39.378375    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:39.388919    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:39.388997    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:39.398739    4887 logs.go:276] 0 containers: []
	W0917 10:49:39.398750    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:39.398817    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:39.408976    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:39.408993    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:39.409000    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:39.413238    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:39.413247    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:39.434334    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:39.434344    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:39.445609    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:39.445619    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:39.456625    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:39.456633    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:39.468416    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:39.468428    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:39.485850    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:39.485859    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:39.498137    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:39.498148    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:39.515691    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:39.515705    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:39.527134    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:39.527146    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:39.550504    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:39.550515    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:39.587537    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:39.587546    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:39.633543    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:39.633559    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:39.658757    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:39.658772    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:39.673754    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:39.673764    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:39.685004    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:39.685017    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:39.708478    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:39.708489    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:42.228639    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:47.230848    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:47.231014    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:47.245218    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:47.245304    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:47.256764    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:47.256847    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:47.266827    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:47.266904    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:47.278633    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:47.278714    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:47.288655    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:47.288733    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:47.299788    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:47.299866    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:47.310563    4887 logs.go:276] 0 containers: []
	W0917 10:49:47.310575    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:47.310644    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:47.321103    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:47.321122    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:47.321127    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:47.335429    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:47.335439    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:47.353006    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:47.353015    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:47.364861    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:47.364873    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:47.403346    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:47.403357    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:47.438303    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:47.438317    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:47.442508    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:47.442514    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:47.453802    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:47.453813    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:47.465583    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:47.465595    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:47.479205    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:47.479216    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:47.490300    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:47.490313    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:47.514845    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:47.514855    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:47.529795    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:47.529806    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:47.541638    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:47.541650    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:47.553668    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:47.553679    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:47.577854    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:47.577862    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:47.591793    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:47.591804    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:50.107916    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:49:55.109958    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:49:55.110108    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:49:55.121101    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:49:55.121194    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:49:55.131376    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:49:55.131466    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:49:55.141774    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:49:55.141853    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:49:55.152324    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:49:55.152407    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:49:55.167061    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:49:55.167147    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:49:55.177734    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:49:55.177808    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:49:55.187448    4887 logs.go:276] 0 containers: []
	W0917 10:49:55.187465    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:49:55.187534    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:49:55.198252    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:49:55.198268    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:49:55.198273    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:49:55.209603    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:49:55.209615    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:49:55.226925    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:49:55.226935    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:49:55.250558    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:49:55.250566    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:49:55.254883    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:49:55.254889    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:49:55.290348    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:49:55.290363    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:49:55.305393    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:49:55.305405    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:49:55.319096    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:49:55.319113    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:49:55.330751    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:49:55.330767    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:49:55.369856    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:49:55.369866    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:49:55.381446    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:49:55.381457    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:49:55.397346    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:49:55.397358    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:49:55.410630    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:49:55.410640    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:49:55.436190    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:49:55.436202    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:49:55.450245    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:49:55.450257    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:49:55.464836    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:49:55.464848    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:49:55.483458    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:49:55.483469    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:49:57.998388    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:03.000612    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:03.000798    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:03.015538    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:03.015627    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:03.028088    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:03.028160    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:03.041303    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:03.041386    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:03.051759    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:03.051842    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:03.061814    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:03.061891    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:03.076728    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:03.076809    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:03.087365    4887 logs.go:276] 0 containers: []
	W0917 10:50:03.087377    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:03.087446    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:03.101290    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
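Each enumeration pass above issues one docker ps -a per control-plane component, filtered on the k8s_<component> container-name prefix the kubelet assigns. A minimal sketch of that loop in Go, run against a local Docker CLI for illustration rather than over SSH as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches the k8s_<component> prefix used by the kubelet.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
	}
}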
	I0917 10:50:03.101312    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:03.101318    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:03.113689    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:03.113702    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:03.152726    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:03.152738    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:03.167439    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:03.167456    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:03.182041    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:03.182052    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:03.196095    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:03.196111    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:03.220480    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:03.220488    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:03.224507    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:03.224519    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:03.249515    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:03.249526    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:03.286756    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:03.286771    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:03.300750    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:03.300763    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:03.312926    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:03.312936    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:03.324240    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:03.324253    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:03.338761    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:03.338772    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:03.350793    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:03.350804    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:03.368291    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:03.368301    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:03.380422    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:03.380433    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
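Each "Gathering logs for ..." pair above maps a log source to one remote command: docker logs --tail 400 for containers, journalctl -u <unit> -n 400 for system services, and a crictl-with-docker-fallback one-liner for container status. A condensed sketch of that dispatch in Go, run locally for illustration (the log routes every command through an SSH session):

package main

import (
	"fmt"
	"os/exec"
)

// gather maps each log source named in the log to the shell command
// the collector runs for it.
func gather(source, arg string) ([]byte, error) {
	var cmd string
	switch source {
	case "container": // e.g. kube-apiserver [fe20304b4a78]
		cmd = "docker logs --tail 400 " + arg
	case "unit": // e.g. kubelet, docker
		cmd = "sudo journalctl -u " + arg + " -n 400"
	case "status": // crictl if installed, plain docker otherwise
		cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	default:
		return nil, fmt.Errorf("unknown source %q", source)
	}
	return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
}

func main() {
	out, err := gather("unit", "kubelet")
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	fmt.Printf("collected %d bytes of kubelet logs\n", len(out))
}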
	I0917 10:50:05.894428    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:10.896526    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:10.896707    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:10.915242    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:10.915350    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:10.929207    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:10.929300    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:10.940692    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:10.940775    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:10.951284    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:10.951369    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:10.965693    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:10.965778    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:10.976131    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:10.976205    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:10.986819    4887 logs.go:276] 0 containers: []
	W0917 10:50:10.986831    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:10.986898    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:10.997449    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:10.997467    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:10.997473    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:11.012185    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:11.012200    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:11.028141    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:11.028152    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:11.039776    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:11.039787    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:11.051234    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:11.051247    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:11.090264    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:11.090272    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:11.112980    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:11.112992    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:11.127711    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:11.127726    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:11.141025    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:11.141040    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:11.154341    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:11.154352    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:11.179545    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:11.179557    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:11.190785    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:11.190793    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:11.201889    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:11.201900    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:11.237128    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:11.237140    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:11.255085    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:11.255100    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:11.278761    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:11.278769    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:11.290976    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:11.290986    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:13.797657    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:18.799924    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:18.800199    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:18.825048    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:18.825184    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:18.841122    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:18.841221    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:18.854397    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:18.854481    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:18.865833    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:18.865904    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:18.876611    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:18.876696    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:18.892435    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:18.892520    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:18.906670    4887 logs.go:276] 0 containers: []
	W0917 10:50:18.906685    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:18.906756    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:18.925723    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:18.925742    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:18.925748    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:18.947473    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:18.947485    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:18.959380    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:18.959391    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:18.971803    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:18.971813    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:18.995328    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:18.995336    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:19.032823    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:19.032833    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:19.046591    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:19.046603    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:19.071307    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:19.071319    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:19.085801    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:19.085810    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:19.103946    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:19.103957    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:19.116185    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:19.116196    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:19.127946    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:19.127958    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:19.140357    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:19.140368    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:19.152370    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:19.152382    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:19.188934    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:19.188943    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:19.192914    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:19.192921    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:19.207614    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:19.207627    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:21.724835    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:26.727052    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:26.727394    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:26.747957    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:26.748067    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:26.761769    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:26.761857    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:26.773673    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:26.773753    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:26.786199    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:26.786288    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:26.796901    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:26.796967    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:26.807645    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:26.807731    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:26.827908    4887 logs.go:276] 0 containers: []
	W0917 10:50:26.827920    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:26.827993    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:26.838462    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:26.838486    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:26.838491    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:26.849614    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:26.849625    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:26.861264    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:26.861273    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:26.900097    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:26.900109    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:26.935115    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:26.935129    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:26.952957    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:26.952973    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:26.978967    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:26.978990    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:26.996555    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:26.996576    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:27.011239    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:27.011254    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:27.015826    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:27.015835    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:27.033696    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:27.033707    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:27.051895    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:27.051906    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:27.063644    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:27.063655    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:27.074671    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:27.074683    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:27.086758    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:27.086774    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:27.101380    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:27.101393    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:27.125259    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:27.125268    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:29.640811    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:34.642093    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:34.642306    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:34.659325    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:34.659428    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:34.672953    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:34.673047    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:34.684685    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:34.684771    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:34.696040    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:34.696120    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:34.706614    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:34.706695    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:34.717223    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:34.717296    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:34.728784    4887 logs.go:276] 0 containers: []
	W0917 10:50:34.728796    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:34.728871    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:34.739408    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:34.739426    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:34.739432    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:34.756304    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:34.756320    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:34.781042    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:34.781053    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:34.792084    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:34.792095    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:34.804227    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:34.804243    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:34.815793    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:34.815807    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:34.819801    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:34.819810    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:34.854570    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:34.854584    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:34.866655    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:34.866670    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:34.907830    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:34.907843    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:34.921969    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:34.921979    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:34.936427    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:34.936436    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:34.947614    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:34.947627    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:34.962370    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:34.962380    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:34.974420    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:34.974431    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:34.986146    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:34.986157    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:35.009187    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:35.009204    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:37.532607    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:42.534821    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:42.535244    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:42.573455    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:42.573587    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:42.588894    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:42.588984    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:42.601742    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:42.601832    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:42.613370    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:42.613456    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:42.627920    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:42.628003    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:42.638132    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:42.638207    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:42.648810    4887 logs.go:276] 0 containers: []
	W0917 10:50:42.648829    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:42.648924    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:42.660164    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:42.660185    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:42.660191    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:42.671960    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:42.671970    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:42.697603    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:42.697617    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:42.715675    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:42.715690    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:42.733853    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:42.733867    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:42.746233    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:42.746242    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:42.758232    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:42.758241    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:42.769993    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:42.770002    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:42.781949    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:42.781958    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:42.800020    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:42.800034    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:42.838120    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:42.838129    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:42.842703    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:42.842709    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:42.877029    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:42.877040    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:42.890880    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:42.890888    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:42.910704    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:42.910716    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:42.922824    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:42.922836    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:42.934667    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:42.934678    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:45.461662    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:50.462521    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:50.462756    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:50:50.476593    4887 logs.go:276] 2 containers: [fe20304b4a78 185cd67f41ca]
	I0917 10:50:50.476709    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:50:50.487908    4887 logs.go:276] 2 containers: [ee73142452a3 98b0c48c9735]
	I0917 10:50:50.487986    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:50:50.498526    4887 logs.go:276] 1 containers: [b4b1cb12d6f7]
	I0917 10:50:50.498603    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:50:50.512771    4887 logs.go:276] 2 containers: [35bf7ad314bf 4dabcabdd1a5]
	I0917 10:50:50.512860    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:50:50.522904    4887 logs.go:276] 1 containers: [e0177a3f9729]
	I0917 10:50:50.522990    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:50:50.533722    4887 logs.go:276] 2 containers: [8e22878b9f05 06f0615ccfda]
	I0917 10:50:50.533806    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:50:50.544060    4887 logs.go:276] 0 containers: []
	W0917 10:50:50.544071    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:50:50.544138    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:50:50.554597    4887 logs.go:276] 2 containers: [78c4c3524d72 9dfa9e157626]
	I0917 10:50:50.554613    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:50:50.554619    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:50:50.566965    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:50:50.566981    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:50:50.571133    4887 logs.go:123] Gathering logs for kube-proxy [e0177a3f9729] ...
	I0917 10:50:50.571141    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0177a3f9729"
	I0917 10:50:50.582720    4887 logs.go:123] Gathering logs for kube-controller-manager [06f0615ccfda] ...
	I0917 10:50:50.582730    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f0615ccfda"
	I0917 10:50:50.595055    4887 logs.go:123] Gathering logs for storage-provisioner [9dfa9e157626] ...
	I0917 10:50:50.595066    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfa9e157626"
	I0917 10:50:50.606650    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:50:50.606663    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:50:50.628400    4887 logs.go:123] Gathering logs for storage-provisioner [78c4c3524d72] ...
	I0917 10:50:50.628408    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c4c3524d72"
	I0917 10:50:50.639633    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:50:50.639647    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:50:50.678494    4887 logs.go:123] Gathering logs for kube-apiserver [fe20304b4a78] ...
	I0917 10:50:50.678503    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe20304b4a78"
	I0917 10:50:50.699743    4887 logs.go:123] Gathering logs for etcd [ee73142452a3] ...
	I0917 10:50:50.699758    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee73142452a3"
	I0917 10:50:50.713221    4887 logs.go:123] Gathering logs for etcd [98b0c48c9735] ...
	I0917 10:50:50.713237    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b0c48c9735"
	I0917 10:50:50.727637    4887 logs.go:123] Gathering logs for kube-scheduler [4dabcabdd1a5] ...
	I0917 10:50:50.727646    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dabcabdd1a5"
	I0917 10:50:50.741900    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:50:50.741915    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:50:50.777653    4887 logs.go:123] Gathering logs for kube-apiserver [185cd67f41ca] ...
	I0917 10:50:50.777663    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185cd67f41ca"
	I0917 10:50:50.802906    4887 logs.go:123] Gathering logs for coredns [b4b1cb12d6f7] ...
	I0917 10:50:50.802918    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b1cb12d6f7"
	I0917 10:50:50.814334    4887 logs.go:123] Gathering logs for kube-scheduler [35bf7ad314bf] ...
	I0917 10:50:50.814347    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35bf7ad314bf"
	I0917 10:50:50.826351    4887 logs.go:123] Gathering logs for kube-controller-manager [8e22878b9f05] ...
	I0917 10:50:50.826362    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e22878b9f05"
	I0917 10:50:53.344691    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:50:58.346850    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:50:58.346927    4887 kubeadm.go:597] duration metric: took 4m3.576523208s to restartPrimaryControlPlane
	W0917 10:50:58.346996    4887 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 10:50:58.347025    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0917 10:50:59.354452    4887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.007447584s)
	I0917 10:50:59.354535    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:50:59.359659    4887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 10:50:59.362365    4887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 10:50:59.365178    4887 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 10:50:59.365184    4887 kubeadm.go:157] found existing configuration files:
	
	I0917 10:50:59.365209    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf
	I0917 10:50:59.367670    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 10:50:59.367694    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 10:50:59.370247    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf
	I0917 10:50:59.373188    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 10:50:59.373210    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 10:50:59.376237    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf
	I0917 10:50:59.378692    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 10:50:59.378718    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 10:50:59.381657    4887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf
	I0917 10:50:59.384729    4887 kubeadm.go:163] "https://control-plane.minikube.internal:50495" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50495 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 10:50:59.384755    4887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
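The four grep/rm pairs above implement a stale-kubeconfig sweep: any config that does not mention the expected control-plane endpoint is deleted so kubeadm init can regenerate it. A sketch of the same logic in Go, using direct file reads in place of the sudo grep over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep-then-rm sequence
// in the log (done there with sudo over SSH).
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			// Missing file: nothing to clean, same as grep exiting 2 above.
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("stale config, removing:", p)
			os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:50495",
		[]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
}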
	I0917 10:50:59.387364    4887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 10:50:59.405190    4887 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 10:50:59.405228    4887 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 10:50:59.454062    4887 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 10:50:59.454114    4887 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 10:50:59.454156    4887 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 10:50:59.504108    4887 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 10:50:59.509332    4887 out.go:235]   - Generating certificates and keys ...
	I0917 10:50:59.509368    4887 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 10:50:59.509400    4887 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 10:50:59.509467    4887 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 10:50:59.509545    4887 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 10:50:59.509611    4887 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 10:50:59.509669    4887 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 10:50:59.509751    4887 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 10:50:59.509798    4887 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 10:50:59.509852    4887 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 10:50:59.509908    4887 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 10:50:59.509974    4887 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 10:50:59.510020    4887 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 10:50:59.592095    4887 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 10:50:59.669100    4887 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 10:50:59.762830    4887 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 10:50:59.795626    4887 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 10:50:59.829048    4887 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 10:50:59.829422    4887 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 10:50:59.829451    4887 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 10:50:59.916953    4887 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 10:50:59.920812    4887 out.go:235]   - Booting up control plane ...
	I0917 10:50:59.920873    4887 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 10:50:59.923044    4887 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 10:50:59.923139    4887 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 10:50:59.923293    4887 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 10:50:59.923390    4887 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 10:51:04.421387    4887 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501986 seconds
	I0917 10:51:04.421446    4887 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 10:51:04.424750    4887 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 10:51:04.939312    4887 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 10:51:04.939571    4887 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-293000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 10:51:05.442792    4887 kubeadm.go:310] [bootstrap-token] Using token: 4qi2qg.9x5j38z4v8y3lhdh
	I0917 10:51:05.448491    4887 out.go:235]   - Configuring RBAC rules ...
	I0917 10:51:05.448558    4887 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 10:51:05.448601    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 10:51:05.454239    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 10:51:05.455100    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 10:51:05.456091    4887 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 10:51:05.458259    4887 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 10:51:05.461675    4887 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 10:51:05.628769    4887 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 10:51:05.849224    4887 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 10:51:05.849967    4887 kubeadm.go:310] 
	I0917 10:51:05.850000    4887 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 10:51:05.850003    4887 kubeadm.go:310] 
	I0917 10:51:05.850041    4887 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 10:51:05.850044    4887 kubeadm.go:310] 
	I0917 10:51:05.850058    4887 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 10:51:05.850100    4887 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 10:51:05.850141    4887 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 10:51:05.850146    4887 kubeadm.go:310] 
	I0917 10:51:05.850173    4887 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 10:51:05.850177    4887 kubeadm.go:310] 
	I0917 10:51:05.850211    4887 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 10:51:05.850216    4887 kubeadm.go:310] 
	I0917 10:51:05.850242    4887 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 10:51:05.850296    4887 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 10:51:05.850359    4887 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 10:51:05.850362    4887 kubeadm.go:310] 
	I0917 10:51:05.850409    4887 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 10:51:05.850450    4887 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 10:51:05.850453    4887 kubeadm.go:310] 
	I0917 10:51:05.850497    4887 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qi2qg.9x5j38z4v8y3lhdh \
	I0917 10:51:05.850558    4887 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d \
	I0917 10:51:05.850570    4887 kubeadm.go:310] 	--control-plane 
	I0917 10:51:05.850573    4887 kubeadm.go:310] 
	I0917 10:51:05.850614    4887 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 10:51:05.850617    4887 kubeadm.go:310] 
	I0917 10:51:05.850656    4887 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qi2qg.9x5j38z4v8y3lhdh \
	I0917 10:51:05.850712    4887 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:36041a92e029834f33dc421547a4417b75c39ebfd82ce914924ecffa9817b69d 
	I0917 10:51:05.850842    4887 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
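The --discovery-token-ca-cert-hash printed with the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained Go sketch recomputing it from a CA certificate PEM (the path below is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the kubeadm discovery-token-ca-cert-hash:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}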
	I0917 10:51:05.850857    4887 cni.go:84] Creating CNI manager for ""
	I0917 10:51:05.850868    4887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:51:05.855074    4887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 10:51:05.863962    4887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 10:51:05.867112    4887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
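The 496-byte conflist scp'd from memory above is not shown in the log. As an illustration only, a typical bridge CNI configuration of the kind this step writes might look like the constant below; the plugin names and the 10.244.0.0/16 pod subnet are assumptions, not the recorded payload:

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI conflist; the exact contents written by the
// "scp memory --> /etc/cni/net.d/1-k8s.conflist" step are not visible
// in the log, so every value here is an assumption.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors the scp-from-memory step: the config is generated
	// in-process and written straight into the CNI directory.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println(err)
	}
}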
	I0917 10:51:05.872351    4887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 10:51:05.872410    4887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 10:51:05.872416    4887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-293000 minikube.k8s.io/updated_at=2024_09_17T10_51_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=stopped-upgrade-293000 minikube.k8s.io/primary=true
	I0917 10:51:05.917297    4887 ops.go:34] apiserver oom_adj: -16
	I0917 10:51:05.917313    4887 kubeadm.go:1113] duration metric: took 44.955416ms to wait for elevateKubeSystemPrivileges
	I0917 10:51:05.917322    4887 kubeadm.go:394] duration metric: took 4m11.16067075s to StartCluster
	I0917 10:51:05.917332    4887 settings.go:142] acquiring lock: {Name:mk01dda79792b7eaa96d8ee72bfae59b39d5fab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:51:05.917420    4887 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:51:05.917819    4887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/kubeconfig: {Name:mk31f3a4e5ba5b55f1c245ae17bd3947ee606141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:51:05.918021    4887 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:51:05.918060    4887 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:51:05.918103    4887 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-293000"
	I0917 10:51:05.918111    4887 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-293000"
	W0917 10:51:05.918115    4887 addons.go:243] addon storage-provisioner should already be in state true
	I0917 10:51:05.918128    4887 host.go:66] Checking if "stopped-upgrade-293000" exists ...
	I0917 10:51:05.918153    4887 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-293000"
	I0917 10:51:05.918192    4887 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:51:05.918201    4887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-293000"
	I0917 10:51:05.919337    4887 kapi.go:59] client config for stopped-upgrade-293000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/stopped-upgrade-293000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10421d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0917 10:51:05.919458    4887 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-293000"
	W0917 10:51:05.919463    4887 addons.go:243] addon default-storageclass should already be in state true
	I0917 10:51:05.919469    4887 host.go:66] Checking if "stopped-upgrade-293000" exists ...
	I0917 10:51:05.921981    4887 out.go:177] * Verifying Kubernetes components...
	I0917 10:51:05.922276    4887 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 10:51:05.923265    4887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 10:51:05.923281    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
	I0917 10:51:05.925935    4887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 10:51:05.929986    4887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:51:05.933973    4887 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 10:51:05.933980    4887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 10:51:05.933986    4887 sshutil.go:53] new ssh client: &{IP:localhost Port:50461 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa Username:docker}
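The two sshutil clients above push the addon manifests into the guest over the machine's forwarded SSH port. A rough sketch of an equivalent transfer with plain scp, assuming the docker user and using the port and key the trace reports (the real ssh_runner streams the bytes directly and writes as root; copying to /tmp sidesteps that here):

    package main

    import (
        "log"
        "os/exec"
    )

    // Hedged sketch: scp against the forwarded port and machine key from the
    // log. The local manifest path and /tmp destination are illustrative.
    func main() {
        cmd := exec.Command("scp", "-P", "50461",
            "-i", "/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/stopped-upgrade-293000/id_rsa",
            "storage-provisioner.yaml",
            "docker@localhost:/tmp/storage-provisioner.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("scp failed: %v\n%s", err, out)
        }
    }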
	I0917 10:51:06.022985    4887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:51:06.028369    4887 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:51:06.028414    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:51:06.032170    4887 api_server.go:72] duration metric: took 114.141166ms to wait for apiserver process to appear ...
	I0917 10:51:06.032178    4887 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:51:06.032184    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:06.053983    4887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 10:51:06.078400    4887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
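Both manifests are then applied inside the guest with the version-pinned kubectl that minikube stages under /var/lib/minikube/binaries, against the VM-local kubeconfig. A sketch of one equivalent invocation, paths exactly as logged:

    package main

    import (
        "log"
        "os/exec"
    )

    // Hedged sketch of the apply step: sudo carries the KUBECONFIG
    // assignment, and kubectl is the staged v1.24.1 binary from the trace.
    func main() {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
    }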
	I0917 10:51:06.426396    4887 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:51:06.426408    4887 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:51:11.034117    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:11.034154    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:16.034648    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:16.034698    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:21.034987    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:21.035022    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:26.035504    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:26.035551    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:31.036304    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:31.036354    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:36.037228    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:36.037279    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 10:51:36.427725    4887 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 10:51:36.431899    4887 out.go:177] * Enabled addons: storage-provisioner
	I0917 10:51:36.439808    4887 addons.go:510] duration metric: took 30.522691042s for enable addons: enabled=[storage-provisioner]
	I0917 10:51:41.038510    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:41.038558    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:46.040172    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:46.040252    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:51.042598    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:51.042655    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:51:56.043528    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:51:56.043558    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:01.045608    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:01.045653    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:06.046556    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
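From here the trace settles into a fixed diagnostic loop: every healthz probe against https://10.0.2.15:8443/healthz times out after five seconds, and minikube falls back to collecting container logs before probing again. A minimal sketch of that probe, assuming the retry cadence (the endpoint and the 5s client timeout match the log; the real client authenticates with the profile's client certs, so the TLS skip below is only to keep the sketch self-contained):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Hedged sketch of the healthz wait: GET /healthz with a short client
    // timeout, retried until the apiserver answers 200 "ok".
    func waitForHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        for i := 0; i < 10; i++ {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // assumed backoff between attempts
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }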
	I0917 10:52:06.046751    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:06.058075    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:06.058160    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:06.068723    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:06.068809    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:06.079494    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:06.079576    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:06.089800    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:06.089859    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:06.100343    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:06.100429    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:06.113585    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:06.113669    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:06.124222    4887 logs.go:276] 0 containers: []
	W0917 10:52:06.124233    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:06.124300    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:06.134956    4887 logs.go:276] 1 containers: [57fca782690d]
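Each collection pass first enumerates the control-plane containers by the kubelet's k8s_<component> container naming convention, one filtered `docker ps -a` per component. A sketch of that discovery step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Hedged sketch of per-component discovery: filter by container name,
    // format down to bare IDs, and report how many matched.
    func main() {
        for _, name := range []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
            "k8s_kube-scheduler", "k8s_kube-proxy",
            "k8s_kube-controller-manager", "k8s_storage-provisioner",
        } {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name,
                "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
        }
    }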
	I0917 10:52:06.134972    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:06.134977    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:06.150344    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:06.150357    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:06.186193    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:06.186204    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:06.202759    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:06.202768    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:06.214026    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:06.214036    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:06.225512    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:06.225526    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:06.243299    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:06.243310    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:06.254690    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:06.254706    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:06.279367    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:06.279377    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:06.290370    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:06.290381    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:06.295193    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:06.295200    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:06.332756    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:06.332766    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:06.347783    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:06.347798    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
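With the IDs in hand, the pass tails each container's logs and the relevant journald units; the same commands then repeat verbatim on every iteration of the loop below. A sketch of one gathering round, using a container ID and the exact journalctl invocations from the trace:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Hedged sketch of a gathering round: docker logs per discovered
    // container plus the journalctl commands quoted above.
    func main() {
        gather := [][]string{
            {"docker", "logs", "--tail", "400", "64c069638ec7"},
            {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
            {"sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
        }
        for _, argv := range gather {
            out, _ := exec.Command(argv[0], argv[1:]...).CombinedOutput()
            fmt.Printf("--- %v (%d bytes)\n", argv, len(out))
        }
    }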
	I0917 10:52:08.861613    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:13.863995    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:13.864185    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:13.880310    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:13.880397    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:13.894649    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:13.894738    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:13.905230    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:13.905311    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:13.915346    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:13.915420    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:13.925722    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:13.925798    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:13.935957    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:13.936024    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:13.947029    4887 logs.go:276] 0 containers: []
	W0917 10:52:13.947040    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:13.947098    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:13.957049    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:52:13.957064    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:13.957069    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:13.974710    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:13.974721    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:13.986479    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:13.986493    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:52:13.998131    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:13.998141    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:14.010489    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:14.010502    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:14.033861    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:14.033868    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:14.045493    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:14.045505    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:14.078814    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:14.078822    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:14.112322    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:14.112337    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:14.126128    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:14.126138    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:14.137358    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:14.137368    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:14.155410    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:14.155419    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:14.172195    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:14.172206    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:16.678586    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:21.681000    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:21.681247    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:21.703693    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:21.703825    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:21.725169    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:21.725263    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:21.736916    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:21.736994    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:21.747110    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:21.747201    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:21.770704    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:21.770792    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:21.781083    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:21.781154    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:21.790688    4887 logs.go:276] 0 containers: []
	W0917 10:52:21.790698    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:21.790758    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:21.801349    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:52:21.801364    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:21.801369    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:21.812966    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:21.812980    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:21.824466    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:21.824480    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:21.857802    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:21.857814    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:21.893276    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:21.893290    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:21.904751    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:21.904764    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:21.921213    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:21.921229    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:21.938443    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:21.938459    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:21.962048    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:21.962058    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:21.966816    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:21.966824    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:21.980944    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:21.980952    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:21.999061    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:21.999070    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:22.010377    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:22.010394    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:52:24.523968    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:29.526750    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:29.527217    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:29.567273    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:29.567426    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:29.588987    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:29.589118    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:29.605510    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:29.605589    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:29.618380    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:29.618460    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:29.629066    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:29.629153    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:29.639854    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:29.639925    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:29.650056    4887 logs.go:276] 0 containers: []
	W0917 10:52:29.650068    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:29.650136    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:29.661790    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:52:29.661806    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:29.661812    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:29.699553    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:29.699567    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:29.716904    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:29.716917    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:29.730039    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:29.730050    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:29.745184    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:29.745194    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:52:29.757914    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:29.757928    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:29.778109    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:29.778127    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:29.815325    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:29.815345    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:29.820504    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:29.820519    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:29.834109    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:29.834131    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:29.847173    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:29.847184    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:29.871888    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:29.871915    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:29.888016    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:29.888037    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:32.403909    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:37.405978    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:37.406170    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:37.425606    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:37.425686    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:37.439632    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:37.439725    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:37.451565    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:37.451642    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:37.463710    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:37.463774    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:37.476151    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:37.476219    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:37.486747    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:37.486812    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:37.497482    4887 logs.go:276] 0 containers: []
	W0917 10:52:37.497494    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:37.497546    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:37.508204    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:52:37.508218    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:37.508224    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:37.522847    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:37.522858    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:52:37.534777    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:37.534786    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:37.553139    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:37.553148    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:37.564310    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:37.564321    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:37.587236    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:37.587244    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:37.599145    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:37.599160    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:37.613810    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:37.613821    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:37.618594    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:37.618600    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:37.652741    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:37.652753    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:37.666932    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:37.666943    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:37.678580    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:37.678590    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:37.691143    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:37.691153    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:40.228471    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:45.231095    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:45.231604    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:45.272160    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:45.272326    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:45.293581    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:45.293717    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:45.308736    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:45.308812    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:45.320955    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:45.321037    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:45.331588    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:45.331654    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:45.342046    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:45.342123    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:45.352552    4887 logs.go:276] 0 containers: []
	W0917 10:52:45.352562    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:45.352621    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:45.363333    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:52:45.363346    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:45.363351    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:45.388199    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:45.388207    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:45.404138    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:45.404149    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:45.439393    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:45.439400    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:45.443543    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:45.443548    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:45.460189    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:45.460202    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:45.471702    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:45.471713    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:52:45.483297    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:45.483307    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:45.500069    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:45.500078    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:45.534239    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:45.534253    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:45.549565    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:45.549574    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:45.561479    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:45.561491    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:45.575997    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:45.576007    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:48.089142    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:52:53.089662    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:52:53.090048    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:52:53.125049    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:52:53.125193    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:52:53.144765    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:52:53.144856    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:52:53.158819    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:52:53.158911    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:52:53.170629    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:52:53.170709    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:52:53.188340    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:52:53.188422    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:52:53.199365    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:52:53.199452    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:52:53.209642    4887 logs.go:276] 0 containers: []
	W0917 10:52:53.209655    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:52:53.209718    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:52:53.220758    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:52:53.220778    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:52:53.220783    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:52:53.234949    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:52:53.234961    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:52:53.254322    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:52:53.254333    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:52:53.266133    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:52:53.266146    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:52:53.301798    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:52:53.301806    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:52:53.336777    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:52:53.336792    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:52:53.349057    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:52:53.349070    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:52:53.360607    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:52:53.360619    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:52:53.375020    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:52:53.375032    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:52:53.388937    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:52:53.388949    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:52:53.400518    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:52:53.400530    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:52:53.424304    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:52:53.424314    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:52:53.428379    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:52:53.428387    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:52:55.944101    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:00.944522    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:00.945025    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:00.980703    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:00.980848    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:00.999913    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:01.000030    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:01.014558    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:53:01.014649    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:01.026359    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:01.026443    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:01.037103    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:01.037186    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:01.047762    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:01.047843    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:01.058401    4887 logs.go:276] 0 containers: []
	W0917 10:53:01.058412    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:01.058475    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:01.069794    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:01.069813    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:01.069818    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:01.081355    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:01.081364    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:01.092843    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:01.092855    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:01.128274    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:01.128283    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:01.140161    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:01.140170    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:01.152154    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:01.152166    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:01.167370    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:01.167381    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:01.179447    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:01.179456    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:01.201712    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:01.201722    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:01.226780    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:01.226790    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:01.230916    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:01.230925    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:01.266031    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:01.266042    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:01.280671    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:01.280682    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:03.797080    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:08.799689    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:08.799919    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:08.819846    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:08.819957    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:08.834448    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:08.834539    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:08.847479    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:53:08.847550    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:08.858593    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:08.858673    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:08.870294    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:08.870376    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:08.883294    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:08.883374    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:08.893956    4887 logs.go:276] 0 containers: []
	W0917 10:53:08.893972    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:08.894047    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:08.904809    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:08.904824    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:08.904829    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:08.920297    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:08.920308    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:08.955023    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:08.955029    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:08.990159    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:08.990170    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:09.005503    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:09.005518    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:09.018086    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:09.018096    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:09.036559    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:09.036574    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:09.050002    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:09.050020    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:09.068062    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:09.068070    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:09.093347    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:09.093355    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:09.105271    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:09.105282    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:09.110063    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:09.110069    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:09.124935    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:09.124943    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:11.642169    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:16.644860    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:16.645233    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:16.678473    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:16.678610    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:16.698901    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:16.699020    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:16.713751    4887 logs.go:276] 2 containers: [9a194630c6b2 3055fef16936]
	I0917 10:53:16.713839    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:16.726355    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:16.726430    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:16.737122    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:16.737202    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:16.748241    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:16.748328    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:16.759014    4887 logs.go:276] 0 containers: []
	W0917 10:53:16.759026    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:16.759089    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:16.770079    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:16.770095    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:16.770101    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:16.805057    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:16.805066    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:16.819060    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:16.819070    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:16.831194    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:16.831206    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:16.847432    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:16.847443    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:16.859533    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:16.859545    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:16.877752    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:16.877762    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:16.893246    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:16.893256    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:16.898058    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:16.898068    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:16.933446    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:16.933456    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:16.948382    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:16.948395    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:16.960482    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:16.960494    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:16.984019    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:16.984028    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:19.497949    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:24.500133    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:24.500392    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:24.525313    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:24.525449    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:24.541662    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:24.541754    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:24.557378    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:53:24.557463    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:24.568423    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:24.568504    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:24.578977    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:24.579056    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:24.589712    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:24.589789    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:24.601306    4887 logs.go:276] 0 containers: []
	W0917 10:53:24.601318    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:24.601391    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:24.612066    4887 logs.go:276] 1 containers: [57fca782690d]
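By 10:53:24 the coredns filter matches four containers instead of two: the kubelet has recreated the coredns containers while the apiserver stayed unreachable. One way to tell restarts from freshly created containers is to inspect per-container restart counts (a hedged sketch; the container ID is taken from the log above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Hedged sketch: docker inspect exposes a container's RestartCount,
    // distinguishing in-place restarts from newly created containers.
    func main() {
        out, err := exec.Command("docker", "inspect",
            "--format", "{{.Name}} restarts={{.RestartCount}}",
            "cd9522d7aaf7").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Print(string(out))
    }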
	I0917 10:53:24.612089    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:53:24.612095    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:53:24.623441    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:24.623452    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:24.635049    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:24.635059    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:24.646917    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:24.646930    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:24.664452    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:24.664463    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:24.697996    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:24.698003    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:24.733022    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:24.733034    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:24.747312    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:53:24.747321    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:53:24.759732    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:24.759744    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:24.783362    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:24.783371    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:24.787260    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:24.787270    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:24.799328    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:24.799339    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:24.822179    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:24.822188    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:24.834376    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:24.834386    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:24.846496    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:24.846510    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:27.369377    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:32.371008    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:32.371100    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:32.383004    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:32.383091    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:32.395559    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:32.395654    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:32.407966    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:53:32.408060    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:32.419826    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:32.419896    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:32.438520    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:32.438589    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:32.451011    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:32.451072    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:32.462421    4887 logs.go:276] 0 containers: []
	W0917 10:53:32.462433    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:32.462492    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:32.476423    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:32.476437    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:32.476443    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:32.494770    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:53:32.494786    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:53:32.507469    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:32.507482    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:32.542139    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:32.542150    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:32.581497    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:32.581506    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:32.602736    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:32.602749    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:32.619036    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:32.619048    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:32.633048    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:32.633057    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:32.646033    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:32.646044    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:32.659243    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:32.659256    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:32.675494    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:32.675508    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:32.695006    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:32.695018    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:32.721411    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:32.721428    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:32.726366    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:53:32.726376    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:53:32.738373    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:32.738384    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:35.254649    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:40.257374    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:40.257957    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:40.299246    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:40.299406    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:40.321348    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:40.321475    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:40.337144    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:53:40.337238    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:40.349536    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:40.349618    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:40.361012    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:40.361087    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:40.371868    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:40.371936    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:40.382183    4887 logs.go:276] 0 containers: []
	W0917 10:53:40.382200    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:40.382272    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:40.396019    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:40.396034    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:40.396040    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:40.400628    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:40.400637    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:40.414202    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:53:40.414217    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:53:40.425403    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:40.425416    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:40.436959    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:40.436972    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:40.448694    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:40.448704    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:40.484321    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:40.484331    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:40.505831    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:40.505842    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:40.530738    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:40.530748    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:40.542553    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:53:40.542564    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:53:40.554233    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:40.554244    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:40.565975    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:40.565986    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:40.580527    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:40.580538    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:40.595058    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:40.595071    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:40.607648    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:40.607662    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:43.143809    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:48.146155    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:48.146663    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:48.183277    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:48.183453    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:48.202794    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:48.202893    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:48.218381    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:53:48.218481    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:48.231063    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:48.231153    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:48.241892    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:48.241986    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:48.253017    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:48.253105    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:48.263635    4887 logs.go:276] 0 containers: []
	W0917 10:53:48.263647    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:48.263721    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:48.279046    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:48.279065    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:48.279071    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:48.283393    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:48.283400    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:48.297300    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:53:48.297312    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:53:48.308964    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:48.308975    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:48.323697    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:48.323707    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:48.345888    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:48.345899    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:48.357469    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:48.357480    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:48.391283    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:48.391294    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:48.426042    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:53:48.426052    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:53:48.437382    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:48.437391    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:48.448670    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:48.448679    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:48.460448    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:48.460457    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:48.472544    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:48.472555    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:48.487060    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:48.487074    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:48.498855    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:48.498868    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
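
Each failed probe triggers the same gathering pass: docker ps -a --filter=name=k8s_<component> to enumerate container IDs (the kubelet names containers k8s_<component>_...), then docker logs --tail 400 per ID. A minimal sketch of that enumerate-and-tail pattern follows, assuming only a docker CLI on PATH; the component names mirror the filters above, and this is an illustration, not minikube's logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches
// the kubelet's k8s_<component> prefix, mirroring the docker ps calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Tail the last 400 lines, as the gatherers above do.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}

Note that the pass always finds the same containers (one apiserver, one etcd, four coredns, and so on) and no kindnet container, so each iteration of the transcript differs only in timestamps.
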
	I0917 10:53:51.025905    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:53:56.028192    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:53:56.028268    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:53:56.042538    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:53:56.042613    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:53:56.055182    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:53:56.055246    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:53:56.066269    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:53:56.066349    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:53:56.077021    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:53:56.077083    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:53:56.088207    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:53:56.088303    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:53:56.099460    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:53:56.099542    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:53:56.110482    4887 logs.go:276] 0 containers: []
	W0917 10:53:56.110496    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:53:56.110576    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:53:56.122892    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:53:56.122909    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:53:56.122915    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:53:56.141911    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:53:56.141920    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:53:56.153946    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:53:56.153961    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:53:56.167038    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:53:56.167052    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:53:56.185652    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:53:56.185661    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:53:56.201055    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:53:56.201068    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:53:56.213993    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:53:56.214006    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:53:56.227046    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:53:56.227062    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:53:56.243958    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:53:56.243975    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:53:56.270607    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:53:56.270644    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:53:56.275124    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:53:56.275133    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:53:56.313243    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:53:56.313255    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:53:56.329123    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:53:56.329139    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:53:56.342309    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:53:56.342323    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:53:56.358354    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:53:56.358366    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:53:58.897018    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:03.899774    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:03.900358    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:03.939485    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:03.939638    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:03.965052    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:03.965180    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:03.980166    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:03.980252    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:03.993116    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:03.993186    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:04.010700    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:04.010782    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:04.021504    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:04.021575    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:04.031888    4887 logs.go:276] 0 containers: []
	W0917 10:54:04.031899    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:04.031956    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:04.043039    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:04.043058    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:04.043065    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:04.059052    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:04.059065    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:04.071327    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:04.071337    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:04.087103    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:04.087117    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:04.105012    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:04.105021    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:04.116626    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:04.116636    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:04.130980    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:04.130991    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:04.166168    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:04.166178    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:04.180974    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:04.180987    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:04.185618    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:04.185626    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:04.210686    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:04.210699    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:04.227599    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:04.227609    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:04.243136    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:04.243147    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:04.255099    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:04.255110    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:04.278422    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:04.278430    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:06.813337    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:11.815186    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:11.815457    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:11.841390    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:11.841534    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:11.859250    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:11.859359    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:11.872973    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:11.873064    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:11.885039    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:11.885112    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:11.898361    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:11.898442    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:11.913643    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:11.913721    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:11.924379    4887 logs.go:276] 0 containers: []
	W0917 10:54:11.924390    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:11.924454    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:11.934520    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:11.934539    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:11.934545    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:11.938657    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:11.938665    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:11.954898    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:11.954911    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:11.979698    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:11.979708    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:12.000889    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:12.000899    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:12.012481    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:12.012490    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:12.047236    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:12.047247    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:12.065811    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:12.065824    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:12.078123    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:12.078135    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:12.090155    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:12.090166    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:12.104811    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:12.104822    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:12.116728    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:12.116740    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:12.152955    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:12.152968    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:12.165362    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:12.165379    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:12.178478    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:12.178493    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:14.692380    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:19.694885    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:19.695105    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:19.717225    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:19.717328    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:19.730431    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:19.730511    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:19.741398    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:19.741480    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:19.752295    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:19.752372    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:19.763255    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:19.763335    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:19.774148    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:19.774227    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:19.784709    4887 logs.go:276] 0 containers: []
	W0917 10:54:19.784719    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:19.784783    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:19.795758    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:19.795773    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:19.795777    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:19.808318    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:19.808329    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:19.820200    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:19.820210    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:19.837923    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:19.837933    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:19.849988    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:19.849998    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:19.865058    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:19.865071    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:19.898966    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:19.898974    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:19.935987    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:19.936003    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:19.948847    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:19.948856    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:19.965315    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:19.965324    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:19.969678    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:19.969687    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:19.985564    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:19.985571    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:20.000561    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:20.000567    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:20.024755    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:20.024773    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:20.038702    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:20.038715    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:22.557463    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:27.558784    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:27.559009    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:27.577044    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:27.577142    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:27.590527    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:27.590603    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:27.604488    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:27.604570    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:27.615999    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:27.616076    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:27.627245    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:27.627316    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:27.638414    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:27.638490    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:27.649762    4887 logs.go:276] 0 containers: []
	W0917 10:54:27.649777    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:27.649849    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:27.661652    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:27.661671    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:27.661677    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:27.673565    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:27.673576    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:27.707010    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:27.707022    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:27.711849    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:27.711859    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:27.726065    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:27.726075    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:27.743778    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:27.743793    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:27.779577    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:27.779591    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:27.797673    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:27.797686    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:27.809679    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:27.809693    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:27.834024    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:27.834031    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:27.846014    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:27.846028    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:27.857990    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:27.858001    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:27.877857    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:27.877868    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:27.892999    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:27.893011    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:27.904993    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:27.905003    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:30.419263    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:35.421896    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:35.422460    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:35.468408    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:35.468555    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:35.490179    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:35.490291    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:35.504738    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:35.504824    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:35.517763    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:35.517843    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:35.528886    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:35.528964    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:35.540046    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:35.540123    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:35.550805    4887 logs.go:276] 0 containers: []
	W0917 10:54:35.550818    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:35.550885    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:35.562014    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:35.562031    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:35.562036    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:35.575564    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:35.575579    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:35.588322    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:35.588334    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:35.602165    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:35.602176    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:35.625665    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:35.625674    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:35.660284    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:35.660296    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:35.674675    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:35.674686    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:35.687078    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:35.687088    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:35.699352    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:35.699362    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:35.735758    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:35.735769    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:35.748410    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:35.748421    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:35.763699    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:35.763711    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:35.768503    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:35.768509    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:35.782970    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:35.782982    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:35.797784    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:35.797797    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:38.317879    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:43.320558    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:43.321180    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:43.360052    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:43.360197    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:43.385501    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:43.385614    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:43.400471    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:43.400546    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:43.412756    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:43.412825    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:43.423216    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:43.423288    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:43.437850    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:43.437932    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:43.448222    4887 logs.go:276] 0 containers: []
	W0917 10:54:43.448236    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:43.448297    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:43.459223    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:43.459245    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:43.459250    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:43.474653    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:43.474662    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:43.493908    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:43.493927    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:43.525550    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:43.525580    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:43.543095    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:43.543109    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:43.555005    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:43.555021    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:43.590596    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:43.590604    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:43.615087    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:43.615097    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:43.626650    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:43.626661    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:43.631538    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:43.631546    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:43.667019    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:43.667035    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:43.679090    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:43.679099    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:43.694311    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:43.694322    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:43.709236    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:43.709252    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:43.723457    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:43.723466    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:46.235625    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:51.236791    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:51.237321    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:51.277726    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:51.277922    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:51.302834    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:51.302965    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:51.318396    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:51.318492    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:51.334275    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:51.334352    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:51.344278    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:51.344354    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:51.354585    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:51.354660    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:51.364998    4887 logs.go:276] 0 containers: []
	W0917 10:54:51.365010    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:51.365081    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:51.375471    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:51.375488    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:51.375494    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:51.393348    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:51.393360    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:51.408634    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:51.408644    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:51.426303    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:51.426313    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:51.437572    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:51.437583    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:51.462720    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:51.462727    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:54:51.474685    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:51.474695    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:51.509956    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:51.509964    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:51.543684    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:51.543695    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:51.560167    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:51.560178    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:51.575404    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:51.575414    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:51.587340    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:51.587355    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:51.592261    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:51.592269    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:51.607524    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:51.607538    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:51.619306    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:51.619315    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:54.132921    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:54:59.135171    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:54:59.135662    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 10:54:59.173806    4887 logs.go:276] 1 containers: [64c069638ec7]
	I0917 10:54:59.173942    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 10:54:59.196354    4887 logs.go:276] 1 containers: [f69d89bf5ab7]
	I0917 10:54:59.196484    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 10:54:59.211821    4887 logs.go:276] 4 containers: [cd9522d7aaf7 6681eba03363 9a194630c6b2 3055fef16936]
	I0917 10:54:59.211911    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 10:54:59.225530    4887 logs.go:276] 1 containers: [87476a242608]
	I0917 10:54:59.225605    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 10:54:59.235943    4887 logs.go:276] 1 containers: [2b4acd0bea8a]
	I0917 10:54:59.236024    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 10:54:59.246365    4887 logs.go:276] 1 containers: [a21c2f40d4cf]
	I0917 10:54:59.246431    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 10:54:59.256997    4887 logs.go:276] 0 containers: []
	W0917 10:54:59.257008    4887 logs.go:278] No container was found matching "kindnet"
	I0917 10:54:59.257075    4887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 10:54:59.271578    4887 logs.go:276] 1 containers: [57fca782690d]
	I0917 10:54:59.271594    4887 logs.go:123] Gathering logs for kube-apiserver [64c069638ec7] ...
	I0917 10:54:59.271600    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64c069638ec7"
	I0917 10:54:59.286076    4887 logs.go:123] Gathering logs for etcd [f69d89bf5ab7] ...
	I0917 10:54:59.286090    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f69d89bf5ab7"
	I0917 10:54:59.299891    4887 logs.go:123] Gathering logs for kube-proxy [2b4acd0bea8a] ...
	I0917 10:54:59.299902    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b4acd0bea8a"
	I0917 10:54:59.311866    4887 logs.go:123] Gathering logs for Docker ...
	I0917 10:54:59.311876    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 10:54:59.336630    4887 logs.go:123] Gathering logs for kubelet ...
	I0917 10:54:59.336640    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 10:54:59.371389    4887 logs.go:123] Gathering logs for dmesg ...
	I0917 10:54:59.371401    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 10:54:59.375867    4887 logs.go:123] Gathering logs for coredns [6681eba03363] ...
	I0917 10:54:59.375876    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6681eba03363"
	I0917 10:54:59.387324    4887 logs.go:123] Gathering logs for coredns [3055fef16936] ...
	I0917 10:54:59.387336    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3055fef16936"
	I0917 10:54:59.399653    4887 logs.go:123] Gathering logs for coredns [cd9522d7aaf7] ...
	I0917 10:54:59.399664    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd9522d7aaf7"
	I0917 10:54:59.411214    4887 logs.go:123] Gathering logs for coredns [9a194630c6b2] ...
	I0917 10:54:59.411226    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a194630c6b2"
	I0917 10:54:59.422782    4887 logs.go:123] Gathering logs for kube-scheduler [87476a242608] ...
	I0917 10:54:59.422805    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87476a242608"
	I0917 10:54:59.436763    4887 logs.go:123] Gathering logs for storage-provisioner [57fca782690d] ...
	I0917 10:54:59.436774    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fca782690d"
	I0917 10:54:59.448554    4887 logs.go:123] Gathering logs for describe nodes ...
	I0917 10:54:59.448563    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 10:54:59.484889    4887 logs.go:123] Gathering logs for kube-controller-manager [a21c2f40d4cf] ...
	I0917 10:54:59.484902    4887 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a21c2f40d4cf"
	I0917 10:54:59.502354    4887 logs.go:123] Gathering logs for container status ...
	I0917 10:54:59.502366    4887 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 10:55:02.016229    4887 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 10:55:07.018966    4887 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 10:55:07.028912    4887 out.go:201] 
	W0917 10:55:07.033159    4887 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0917 10:55:07.033194    4887 out.go:270] * 
	* 
	W0917 10:55:07.035645    4887 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:07.052021    4887 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-293000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (572.87s)
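
The failure above means the apiserver inside the stopped-upgrade-293000 guest never answered its healthz probe within the 6m0s node-wait budget. A minimal sketch for probing the same endpoint by hand, assuming the guest is still up and reachable over SSH (curl's -k skips verification of the self-signed apiserver certificate):

	# Manual healthz probe against the endpoint the test polled (hypothetical follow-up, not part of the test)
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-293000 -- curl -sk https://10.0.2.15:8443/healthz
	# A healthy apiserver replies "ok"; a hang or connection error matches the context-deadline failures above.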

TestPause/serial/Start (9.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-452000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-452000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.833211583s)

-- stdout --
	* [pause-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-452000" primary control-plane node in "pause-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-452000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-452000 -n pause-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-452000 -n pause-452000: exit status 7 (29.904083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)
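
This failure, and every qemu2 start below, stops at the same point: nothing is accepting connections on the /var/run/socket_vmnet UNIX socket that backs the automatically selected socket_vmnet network. A quick sketch for checking the daemon on the build host, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path elsewhere in this report suggests:

	# Does the socket exist, and is the daemon alive? (hypothetical host-side checks)
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	sudo lsof -U | grep socket_vmnet   # shows which processes, if any, are bound to the socket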

TestNoKubernetes/serial/StartWithK8s (9.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-498000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-498000 --driver=qemu2 : exit status 80 (9.761949709s)

-- stdout --
	* [NoKubernetes-498000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-498000" primary control-plane node in "NoKubernetes-498000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-498000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-498000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000: exit status 7 (63.456416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.83s)
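
The qemu invocations captured later in this report launch QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, so the network layer can be exercised without booting a VM at all. A sketch assuming the standard socket_vmnet layout; /usr/bin/true is an arbitrary placeholder for the wrapped command:

	# Isolate the socket_vmnet client from QEMU (hypothetical repro)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# While the daemon is down this should print the same error as the tests:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused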

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251801125s)

-- stdout --
	* [NoKubernetes-498000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-498000
	* Restarting existing qemu2 VM for "NoKubernetes-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000: exit status 7 (58.905791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)
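
The stderr above names its own recovery path. Spelled out as commands, taken directly from the hint in the log (this only helps once the socket_vmnet daemon is reachable again):

	# Recovery sequence suggested by the failure output
	out/minikube-darwin-arm64 delete -p NoKubernetes-498000
	out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2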

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245027166s)

-- stdout --
	* [NoKubernetes-498000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-498000
	* Restarting existing qemu2 VM for "NoKubernetes-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000: exit status 7 (52.645833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)
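
For a probe that involves neither minikube nor QEMU, the socket can be poked directly with netcat. A one-liner assuming the BSD nc shipped with macOS, whose -U flag targets UNIX domain sockets:

	# Minimal socket probe; "Connection refused" here mirrors the driver failures above
	nc -U /var/run/socket_vmnet < /dev/null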

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-498000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-498000 --driver=qemu2 : exit status 80 (5.265699458s)

-- stdout --
	* [NoKubernetes-498000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-498000
	* Restarting existing qemu2 VM for "NoKubernetes-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-498000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-498000 -n NoKubernetes-498000: exit status 7 (62.867416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/auto/Start (9.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.826318791s)

-- stdout --
	* [auto-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-344000" primary control-plane node in "auto-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:53:21.573443    5125 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:53:21.573616    5125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:21.573620    5125 out.go:358] Setting ErrFile to fd 2...
	I0917 10:53:21.573622    5125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:21.573748    5125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:53:21.574830    5125 out.go:352] Setting JSON to false
	I0917 10:53:21.591454    5125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4964,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:53:21.591528    5125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:53:21.597489    5125 out.go:177] * [auto-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:53:21.605470    5125 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:53:21.605541    5125 notify.go:220] Checking for updates...
	I0917 10:53:21.610904    5125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:53:21.616048    5125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:53:21.619426    5125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:53:21.622467    5125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:53:21.625409    5125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:53:21.628795    5125 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:53:21.628859    5125 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:53:21.628925    5125 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:53:21.633494    5125 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:53:21.640427    5125 start.go:297] selected driver: qemu2
	I0917 10:53:21.640433    5125 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:53:21.640439    5125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:53:21.642748    5125 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:53:21.645488    5125 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:53:21.648466    5125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:53:21.648490    5125 cni.go:84] Creating CNI manager for ""
	I0917 10:53:21.648512    5125 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:53:21.648518    5125 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:53:21.648538    5125 start.go:340] cluster config:
	{Name:auto-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:53:21.652136    5125 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:53:21.659309    5125 out.go:177] * Starting "auto-344000" primary control-plane node in "auto-344000" cluster
	I0917 10:53:21.663457    5125 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:53:21.663473    5125 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:53:21.663482    5125 cache.go:56] Caching tarball of preloaded images
	I0917 10:53:21.663548    5125 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:53:21.663553    5125 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:53:21.663633    5125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/auto-344000/config.json ...
	I0917 10:53:21.663644    5125 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/auto-344000/config.json: {Name:mk035838f12bf4ea72829abfd16957be83c169f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:53:21.664043    5125 start.go:360] acquireMachinesLock for auto-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:21.664073    5125 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "auto-344000"
	I0917 10:53:21.664084    5125 start.go:93] Provisioning new machine with config: &{Name:auto-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:21.664111    5125 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:21.667420    5125 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:21.682350    5125 start.go:159] libmachine.API.Create for "auto-344000" (driver="qemu2")
	I0917 10:53:21.682402    5125 client.go:168] LocalClient.Create starting
	I0917 10:53:21.682467    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:21.682496    5125 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:21.682505    5125 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:21.682549    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:21.682576    5125 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:21.682587    5125 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:21.683063    5125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:21.846484    5125 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:21.946167    5125 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:21.946175    5125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:21.946375    5125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2
	I0917 10:53:21.955590    5125 main.go:141] libmachine: STDOUT: 
	I0917 10:53:21.955614    5125 main.go:141] libmachine: STDERR: 
	I0917 10:53:21.955672    5125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2 +20000M
	I0917 10:53:21.963603    5125 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:21.963617    5125 main.go:141] libmachine: STDERR: 
	I0917 10:53:21.963635    5125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2
	I0917 10:53:21.963640    5125 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:21.963650    5125 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:21.963675    5125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:2f:f5:36:3d:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2
	I0917 10:53:21.965356    5125 main.go:141] libmachine: STDOUT: 
	I0917 10:53:21.965371    5125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:21.965394    5125 client.go:171] duration metric: took 282.993958ms to LocalClient.Create
	I0917 10:53:23.967411    5125 start.go:128] duration metric: took 2.303362292s to createHost
	I0917 10:53:23.967458    5125 start.go:83] releasing machines lock for "auto-344000", held for 2.303425791s
	W0917 10:53:23.967499    5125 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:23.977398    5125 out.go:177] * Deleting "auto-344000" in qemu2 ...
	W0917 10:53:23.997345    5125 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:23.997353    5125 start.go:729] Will try again in 5 seconds ...
	I0917 10:53:28.997562    5125 start.go:360] acquireMachinesLock for auto-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:28.998063    5125 start.go:364] duration metric: took 404.042µs to acquireMachinesLock for "auto-344000"
	I0917 10:53:28.998205    5125 start.go:93] Provisioning new machine with config: &{Name:auto-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:28.998475    5125 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:29.003988    5125 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:29.046186    5125 start.go:159] libmachine.API.Create for "auto-344000" (driver="qemu2")
	I0917 10:53:29.046233    5125 client.go:168] LocalClient.Create starting
	I0917 10:53:29.046354    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:29.046413    5125 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:29.046425    5125 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:29.046484    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:29.046528    5125 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:29.046550    5125 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:29.047173    5125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:29.218148    5125 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:29.307521    5125 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:29.307526    5125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:29.307716    5125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2
	I0917 10:53:29.317067    5125 main.go:141] libmachine: STDOUT: 
	I0917 10:53:29.317092    5125 main.go:141] libmachine: STDERR: 
	I0917 10:53:29.317151    5125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2 +20000M
	I0917 10:53:29.325132    5125 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:29.325153    5125 main.go:141] libmachine: STDERR: 
	I0917 10:53:29.325164    5125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2
	I0917 10:53:29.325169    5125 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:29.325176    5125 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:29.325213    5125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:4d:8a:35:b3:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/auto-344000/disk.qcow2
	I0917 10:53:29.326893    5125 main.go:141] libmachine: STDOUT: 
	I0917 10:53:29.326908    5125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:29.326921    5125 client.go:171] duration metric: took 280.690042ms to LocalClient.Create
	I0917 10:53:31.329061    5125 start.go:128] duration metric: took 2.330628416s to createHost
	I0917 10:53:31.329136    5125 start.go:83] releasing machines lock for "auto-344000", held for 2.331118417s
	W0917 10:53:31.329467    5125 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:31.338891    5125 out.go:201] 
	W0917 10:53:31.346126    5125 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:53:31.346150    5125 out.go:270] * 
	* 
	W0917 10:53:31.348421    5125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:53:31.357074    5125 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.83s)
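
Since every network-plugin start in this group dies at the same socket, the remediation is on the host, not in minikube: bring the socket_vmnet daemon back up. A sketch based on the project's documented standalone invocation; the install prefix matches the client path in these logs, while the gateway address is the upstream default rather than anything taken from this report (a Homebrew- or launchd-managed install would instead be restarted through its service manager):

	# Run the daemon in the foreground against the socket path minikube expects (assumed invocation)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet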

TestNetworkPlugins/group/calico/Start (9.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0917 10:53:42.237743    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.94929075s)

-- stdout --
	* [calico-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-344000" primary control-plane node in "calico-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:53:33.552472    5234 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:53:33.552616    5234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:33.552620    5234 out.go:358] Setting ErrFile to fd 2...
	I0917 10:53:33.552622    5234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:33.552761    5234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:53:33.553874    5234 out.go:352] Setting JSON to false
	I0917 10:53:33.570222    5234 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4976,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:53:33.570298    5234 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:53:33.575830    5234 out.go:177] * [calico-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:53:33.583693    5234 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:53:33.583778    5234 notify.go:220] Checking for updates...
	I0917 10:53:33.589603    5234 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:53:33.592616    5234 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:53:33.599548    5234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:53:33.603609    5234 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:53:33.606526    5234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:53:33.609893    5234 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:53:33.609959    5234 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:53:33.610008    5234 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:53:33.613635    5234 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:53:33.620501    5234 start.go:297] selected driver: qemu2
	I0917 10:53:33.620507    5234 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:53:33.620512    5234 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:53:33.622793    5234 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:53:33.625532    5234 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:53:33.628594    5234 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:53:33.628610    5234 cni.go:84] Creating CNI manager for "calico"
	I0917 10:53:33.628617    5234 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0917 10:53:33.628642    5234 start.go:340] cluster config:
	{Name:calico-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:53:33.631943    5234 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:53:33.639606    5234 out.go:177] * Starting "calico-344000" primary control-plane node in "calico-344000" cluster
	I0917 10:53:33.643419    5234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:53:33.643434    5234 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:53:33.643442    5234 cache.go:56] Caching tarball of preloaded images
	I0917 10:53:33.643490    5234 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:53:33.643495    5234 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:53:33.643543    5234 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/calico-344000/config.json ...
	I0917 10:53:33.643553    5234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/calico-344000/config.json: {Name:mk27cd6e51084806fde6aab4cb2b9cef6a1b360f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:53:33.643951    5234 start.go:360] acquireMachinesLock for calico-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:33.643982    5234 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "calico-344000"
	I0917 10:53:33.643990    5234 start.go:93] Provisioning new machine with config: &{Name:calico-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:33.644034    5234 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:33.652439    5234 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:33.667414    5234 start.go:159] libmachine.API.Create for "calico-344000" (driver="qemu2")
	I0917 10:53:33.667437    5234 client.go:168] LocalClient.Create starting
	I0917 10:53:33.667489    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:33.667520    5234 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:33.667529    5234 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:33.667564    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:33.667587    5234 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:33.667595    5234 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:33.668090    5234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:33.831348    5234 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:34.018363    5234 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:34.018378    5234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:34.018655    5234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2
	I0917 10:53:34.028815    5234 main.go:141] libmachine: STDOUT: 
	I0917 10:53:34.028847    5234 main.go:141] libmachine: STDERR: 
	I0917 10:53:34.028920    5234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2 +20000M
	I0917 10:53:34.037415    5234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:34.037430    5234 main.go:141] libmachine: STDERR: 
	I0917 10:53:34.037449    5234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2
	I0917 10:53:34.037455    5234 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:34.037468    5234 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:34.037503    5234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:fe:55:d7:31:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2
	I0917 10:53:34.039145    5234 main.go:141] libmachine: STDOUT: 
	I0917 10:53:34.039159    5234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:34.039178    5234 client.go:171] duration metric: took 371.747417ms to LocalClient.Create
	I0917 10:53:36.041221    5234 start.go:128] duration metric: took 2.397247917s to createHost
	I0917 10:53:36.041263    5234 start.go:83] releasing machines lock for "calico-344000", held for 2.397350667s
	W0917 10:53:36.041290    5234 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:36.056510    5234 out.go:177] * Deleting "calico-344000" in qemu2 ...
	W0917 10:53:36.083644    5234 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:36.083660    5234 start.go:729] Will try again in 5 seconds ...
	I0917 10:53:41.085620    5234 start.go:360] acquireMachinesLock for calico-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:41.085898    5234 start.go:364] duration metric: took 215.208µs to acquireMachinesLock for "calico-344000"
	I0917 10:53:41.085934    5234 start.go:93] Provisioning new machine with config: &{Name:calico-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:41.086046    5234 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:41.094057    5234 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:41.123804    5234 start.go:159] libmachine.API.Create for "calico-344000" (driver="qemu2")
	I0917 10:53:41.123843    5234 client.go:168] LocalClient.Create starting
	I0917 10:53:41.123947    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:41.123993    5234 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:41.124005    5234 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:41.124051    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:41.124087    5234 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:41.124100    5234 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:41.124638    5234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:41.292189    5234 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:41.409444    5234 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:41.409456    5234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:41.409664    5234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2
	I0917 10:53:41.419043    5234 main.go:141] libmachine: STDOUT: 
	I0917 10:53:41.419059    5234 main.go:141] libmachine: STDERR: 
	I0917 10:53:41.419122    5234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2 +20000M
	I0917 10:53:41.427036    5234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:41.427050    5234 main.go:141] libmachine: STDERR: 
	I0917 10:53:41.427065    5234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2
	I0917 10:53:41.427069    5234 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:41.427077    5234 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:41.427118    5234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:9f:ff:47:e6:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/calico-344000/disk.qcow2
	I0917 10:53:41.428719    5234 main.go:141] libmachine: STDOUT: 
	I0917 10:53:41.428734    5234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:41.428748    5234 client.go:171] duration metric: took 304.9095ms to LocalClient.Create
	I0917 10:53:43.430883    5234 start.go:128] duration metric: took 2.3448725s to createHost
	I0917 10:53:43.430965    5234 start.go:83] releasing machines lock for "calico-344000", held for 2.345122375s
	W0917 10:53:43.431326    5234 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:43.443869    5234 out.go:201] 
	W0917 10:53:43.447045    5234 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:53:43.447088    5234 out.go:270] * 
	* 
	W0917 10:53:43.449521    5234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:53:43.459894    5234 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.95s)
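Every create attempt in this group dies the same way: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet, so the VM never boots. A minimal way to check for that condition on the agent (a sketch assuming the /opt/socket_vmnet layout shown in the log; the --vmnet-gateway address is socket_vmnet's documented default, not a value taken from this report):

	# Is the daemon alive and serving the socket the tests point at?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, start it by hand; root is needed to create the vmnet interface.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet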

TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.829492792s)

-- stdout --
	* [custom-flannel-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-344000" primary control-plane node in "custom-flannel-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:53:45.834001    5355 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:53:45.834135    5355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:45.834139    5355 out.go:358] Setting ErrFile to fd 2...
	I0917 10:53:45.834141    5355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:45.834268    5355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:53:45.835333    5355 out.go:352] Setting JSON to false
	I0917 10:53:45.851678    5355 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4988,"bootTime":1726590637,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:53:45.851744    5355 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:53:45.855693    5355 out.go:177] * [custom-flannel-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:53:45.864575    5355 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:53:45.864622    5355 notify.go:220] Checking for updates...
	I0917 10:53:45.870508    5355 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:53:45.873536    5355 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:53:45.876550    5355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:53:45.879533    5355 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:53:45.882603    5355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:53:45.884590    5355 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:53:45.884659    5355 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:53:45.884704    5355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:53:45.888575    5355 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:53:45.895406    5355 start.go:297] selected driver: qemu2
	I0917 10:53:45.895413    5355 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:53:45.895418    5355 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:53:45.897513    5355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:53:45.900555    5355 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:53:45.903610    5355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:53:45.903629    5355 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0917 10:53:45.903641    5355 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0917 10:53:45.903672    5355 start.go:340] cluster config:
	{Name:custom-flannel-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:53:45.907103    5355 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:53:45.914513    5355 out.go:177] * Starting "custom-flannel-344000" primary control-plane node in "custom-flannel-344000" cluster
	I0917 10:53:45.918557    5355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:53:45.918569    5355 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:53:45.918573    5355 cache.go:56] Caching tarball of preloaded images
	I0917 10:53:45.918625    5355 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:53:45.918629    5355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:53:45.918681    5355 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/custom-flannel-344000/config.json ...
	I0917 10:53:45.918694    5355 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/custom-flannel-344000/config.json: {Name:mk8797839cdd82f8863355a7b8fef2560be801d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:53:45.918909    5355 start.go:360] acquireMachinesLock for custom-flannel-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:45.918947    5355 start.go:364] duration metric: took 31.583µs to acquireMachinesLock for "custom-flannel-344000"
	I0917 10:53:45.918957    5355 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:45.918990    5355 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:45.926572    5355 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:45.941480    5355 start.go:159] libmachine.API.Create for "custom-flannel-344000" (driver="qemu2")
	I0917 10:53:45.941506    5355 client.go:168] LocalClient.Create starting
	I0917 10:53:45.941573    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:45.941606    5355 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:45.941618    5355 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:45.941665    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:45.941698    5355 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:45.941705    5355 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:45.942122    5355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:46.112802    5355 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:46.180203    5355 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:46.180211    5355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:46.180424    5355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2
	I0917 10:53:46.189633    5355 main.go:141] libmachine: STDOUT: 
	I0917 10:53:46.189654    5355 main.go:141] libmachine: STDERR: 
	I0917 10:53:46.189719    5355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2 +20000M
	I0917 10:53:46.197745    5355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:46.197761    5355 main.go:141] libmachine: STDERR: 
	I0917 10:53:46.197774    5355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2
	I0917 10:53:46.197780    5355 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:46.197791    5355 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:46.197820    5355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:2c:88:1d:d2:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2
	I0917 10:53:46.199434    5355 main.go:141] libmachine: STDOUT: 
	I0917 10:53:46.199448    5355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:46.199468    5355 client.go:171] duration metric: took 257.964166ms to LocalClient.Create
	I0917 10:53:48.201505    5355 start.go:128] duration metric: took 2.282577334s to createHost
	I0917 10:53:48.201544    5355 start.go:83] releasing machines lock for "custom-flannel-344000", held for 2.282662583s
	W0917 10:53:48.201568    5355 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:48.216327    5355 out.go:177] * Deleting "custom-flannel-344000" in qemu2 ...
	W0917 10:53:48.229843    5355 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:48.229856    5355 start.go:729] Will try again in 5 seconds ...
	I0917 10:53:53.231659    5355 start.go:360] acquireMachinesLock for custom-flannel-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:53.232423    5355 start.go:364] duration metric: took 613.708µs to acquireMachinesLock for "custom-flannel-344000"
	I0917 10:53:53.232528    5355 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:53.232811    5355 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:53.240543    5355 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:53.288595    5355 start.go:159] libmachine.API.Create for "custom-flannel-344000" (driver="qemu2")
	I0917 10:53:53.288649    5355 client.go:168] LocalClient.Create starting
	I0917 10:53:53.288790    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:53.288860    5355 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:53.288878    5355 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:53.288955    5355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:53.289005    5355 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:53.289021    5355 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:53.289646    5355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:53.474847    5355 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:53.571975    5355 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:53.571982    5355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:53.572197    5355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2
	I0917 10:53:53.581691    5355 main.go:141] libmachine: STDOUT: 
	I0917 10:53:53.581719    5355 main.go:141] libmachine: STDERR: 
	I0917 10:53:53.581778    5355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2 +20000M
	I0917 10:53:53.589872    5355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:53.589893    5355 main.go:141] libmachine: STDERR: 
	I0917 10:53:53.589904    5355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2
	I0917 10:53:53.589909    5355 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:53.589917    5355 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:53.589955    5355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:3d:75:9f:94:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/custom-flannel-344000/disk.qcow2
	I0917 10:53:53.591636    5355 main.go:141] libmachine: STDOUT: 
	I0917 10:53:53.591655    5355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:53.591671    5355 client.go:171] duration metric: took 303.025917ms to LocalClient.Create
	I0917 10:53:55.593824    5355 start.go:128] duration metric: took 2.361041333s to createHost
	I0917 10:53:55.593894    5355 start.go:83] releasing machines lock for "custom-flannel-344000", held for 2.361497125s
	W0917 10:53:55.594274    5355 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:53:55.605953    5355 out.go:201] 
	W0917 10:53:55.607831    5355 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:53:55.607863    5355 out.go:270] * 
	* 
	W0917 10:53:55.609690    5355 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:53:55.622844    5355 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)
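The same diagnosis can be confirmed without minikube: invoking the client wrapper on its own reproduces the exact error string, which isolates the problem to the daemon side rather than the qemu2 driver or the qemu binary. A quick reproduction (command shape taken from the "executing:" lines above; /usr/bin/true is a hypothetical stand-in for the real qemu-system-aarch64 invocation):

	# Usage: socket_vmnet_client SOCKETPATH COMMAND [ARGS...]
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# While the daemon is down, this prints:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused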

TestNetworkPlugins/group/false/Start (9.92s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0917 10:54:06.392158    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.921768375s)

-- stdout --
	* [false-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-344000" primary control-plane node in "false-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:53:58.066442    5477 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:53:58.066587    5477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:58.066595    5477 out.go:358] Setting ErrFile to fd 2...
	I0917 10:53:58.066598    5477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:53:58.066732    5477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:53:58.067889    5477 out.go:352] Setting JSON to false
	I0917 10:53:58.084249    5477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5001,"bootTime":1726590637,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:53:58.084352    5477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:53:58.090593    5477 out.go:177] * [false-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:53:58.098571    5477 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:53:58.098597    5477 notify.go:220] Checking for updates...
	I0917 10:53:58.104056    5477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:53:58.107432    5477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:53:58.110585    5477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:53:58.113548    5477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:53:58.116494    5477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:53:58.119778    5477 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:53:58.119847    5477 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:53:58.119900    5477 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:53:58.124512    5477 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:53:58.131543    5477 start.go:297] selected driver: qemu2
	I0917 10:53:58.131549    5477 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:53:58.131558    5477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:53:58.133830    5477 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:53:58.136482    5477 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:53:58.139582    5477 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:53:58.139604    5477 cni.go:84] Creating CNI manager for "false"
	I0917 10:53:58.139650    5477 start.go:340] cluster config:
	{Name:false-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:53:58.143286    5477 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:53:58.150394    5477 out.go:177] * Starting "false-344000" primary control-plane node in "false-344000" cluster
	I0917 10:53:58.154482    5477 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:53:58.154505    5477 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:53:58.154511    5477 cache.go:56] Caching tarball of preloaded images
	I0917 10:53:58.154572    5477 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:53:58.154578    5477 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:53:58.154627    5477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/false-344000/config.json ...
	I0917 10:53:58.154638    5477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/false-344000/config.json: {Name:mk3e572ba638969cb386ef971738be2ba766e570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:53:58.154853    5477 start.go:360] acquireMachinesLock for false-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:58.154886    5477 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "false-344000"
	I0917 10:53:58.154896    5477 start.go:93] Provisioning new machine with config: &{Name:false-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:58.154927    5477 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:53:58.162488    5477 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:53:58.178680    5477 start.go:159] libmachine.API.Create for "false-344000" (driver="qemu2")
	I0917 10:53:58.178711    5477 client.go:168] LocalClient.Create starting
	I0917 10:53:58.178766    5477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:53:58.178800    5477 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:58.178809    5477 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:58.178853    5477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:53:58.178879    5477 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:58.178889    5477 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:58.179246    5477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:53:58.343383    5477 main.go:141] libmachine: Creating SSH key...
	I0917 10:53:58.551301    5477 main.go:141] libmachine: Creating Disk image...
	I0917 10:53:58.551315    5477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:53:58.551552    5477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2
	I0917 10:53:58.561194    5477 main.go:141] libmachine: STDOUT: 
	I0917 10:53:58.561211    5477 main.go:141] libmachine: STDERR: 
	I0917 10:53:58.561280    5477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2 +20000M
	I0917 10:53:58.569563    5477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:53:58.569587    5477 main.go:141] libmachine: STDERR: 
	I0917 10:53:58.569600    5477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2
	I0917 10:53:58.569607    5477 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:53:58.569619    5477 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:53:58.569652    5477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:7d:5e:27:84:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2
	I0917 10:53:58.571334    5477 main.go:141] libmachine: STDOUT: 
	I0917 10:53:58.571353    5477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:53:58.571376    5477 client.go:171] duration metric: took 392.670709ms to LocalClient.Create
	I0917 10:54:00.573523    5477 start.go:128] duration metric: took 2.41863675s to createHost
	I0917 10:54:00.573596    5477 start.go:83] releasing machines lock for "false-344000", held for 2.41877525s
	W0917 10:54:00.573647    5477 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:00.589218    5477 out.go:177] * Deleting "false-344000" in qemu2 ...
	W0917 10:54:00.619351    5477 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:00.619381    5477 start.go:729] Will try again in 5 seconds ...
	I0917 10:54:05.621452    5477 start.go:360] acquireMachinesLock for false-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:05.621898    5477 start.go:364] duration metric: took 368.708µs to acquireMachinesLock for "false-344000"
	I0917 10:54:05.622013    5477 start.go:93] Provisioning new machine with config: &{Name:false-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:05.622252    5477 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:05.632792    5477 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:05.671774    5477 start.go:159] libmachine.API.Create for "false-344000" (driver="qemu2")
	I0917 10:54:05.671836    5477 client.go:168] LocalClient.Create starting
	I0917 10:54:05.671958    5477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:05.672027    5477 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:05.672044    5477 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:05.672112    5477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:05.672154    5477 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:05.672167    5477 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:05.672709    5477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:05.840792    5477 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:05.896954    5477 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:05.896964    5477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:05.897146    5477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2
	I0917 10:54:05.906712    5477 main.go:141] libmachine: STDOUT: 
	I0917 10:54:05.906729    5477 main.go:141] libmachine: STDERR: 
	I0917 10:54:05.906786    5477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2 +20000M
	I0917 10:54:05.915032    5477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:05.915049    5477 main.go:141] libmachine: STDERR: 
	I0917 10:54:05.915061    5477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2
	I0917 10:54:05.915067    5477 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:05.915074    5477 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:05.915111    5477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:39:2c:29:2d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/false-344000/disk.qcow2
	I0917 10:54:05.916842    5477 main.go:141] libmachine: STDOUT: 
	I0917 10:54:05.916855    5477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:05.916867    5477 client.go:171] duration metric: took 245.031542ms to LocalClient.Create
	I0917 10:54:07.918991    5477 start.go:128] duration metric: took 2.296754334s to createHost
	I0917 10:54:07.919044    5477 start.go:83] releasing machines lock for "false-344000", held for 2.297202167s
	W0917 10:54:07.919341    5477 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:07.926751    5477 out.go:201] 
	W0917 10:54:07.932864    5477 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:54:07.932883    5477 out.go:270] * 
	* 
	W0917 10:54:07.934740    5477 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:54:07.944764    5477 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.92s)
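
Every failure in this group stops at the same step: the qemu-img convert/resize commands succeed, but /opt/socket_vmnet/bin/socket_vmnet_client is refused when it dials /var/run/socket_vmnet, so the VM never launches and minikube exits with status 80 (GUEST_PROVISION). The probe below is a minimal, hypothetical diagnostic in Go (not part of net_test.go; the socket path is copied from the log) that distinguishes a socket file that was never created from one with no live listener:

	// socketprobe.go - hypothetical one-off diagnostic, not from the test suite.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // path as reported in the failures above

		// A missing file would mean the socket_vmnet daemon never started;
		// an existing file plus a refused dial means nothing is accepting
		// connections on it, which is what this report shows.
		fi, err := os.Stat(path)
		if err != nil {
			fmt.Printf("stat %s: %v\n", path, err)
			os.Exit(1)
		}
		fmt.Printf("%s exists (mode %v)\n", path, fi.Mode())

		// Dial the unix socket the same way socket_vmnet_client would;
		// "connection refused" here reproduces the error in the log.
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Printf("dial %s: %v\n", path, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("listener is up; socket_vmnet is accepting connections")
	}

A refused dial implicates the daemon on the CI host rather than minikube or any particular CNI, which is consistent with every plugin in this group failing identically within about 10 seconds.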

TestNetworkPlugins/group/kindnet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.867371958s)

-- stdout --
	* [kindnet-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-344000" primary control-plane node in "kindnet-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:54:10.176206    5586 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:54:10.176357    5586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:10.176361    5586 out.go:358] Setting ErrFile to fd 2...
	I0917 10:54:10.176367    5586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:10.176518    5586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:54:10.177726    5586 out.go:352] Setting JSON to false
	I0917 10:54:10.194279    5586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5013,"bootTime":1726590637,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:54:10.194347    5586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:54:10.200251    5586 out.go:177] * [kindnet-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:54:10.208335    5586 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:54:10.208369    5586 notify.go:220] Checking for updates...
	I0917 10:54:10.214249    5586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:54:10.217229    5586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:54:10.220320    5586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:54:10.223201    5586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:54:10.226247    5586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:54:10.229564    5586 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:54:10.229635    5586 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:54:10.229684    5586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:54:10.234305    5586 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:54:10.241197    5586 start.go:297] selected driver: qemu2
	I0917 10:54:10.241203    5586 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:54:10.241209    5586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:54:10.243544    5586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:54:10.247174    5586 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:54:10.250335    5586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:54:10.250354    5586 cni.go:84] Creating CNI manager for "kindnet"
	I0917 10:54:10.250358    5586 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 10:54:10.250410    5586 start.go:340] cluster config:
	{Name:kindnet-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:54:10.254168    5586 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:10.262291    5586 out.go:177] * Starting "kindnet-344000" primary control-plane node in "kindnet-344000" cluster
	I0917 10:54:10.266083    5586 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:54:10.266097    5586 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:54:10.266103    5586 cache.go:56] Caching tarball of preloaded images
	I0917 10:54:10.266162    5586 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:54:10.266168    5586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:54:10.266226    5586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kindnet-344000/config.json ...
	I0917 10:54:10.266241    5586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kindnet-344000/config.json: {Name:mk4e2190eff9fe80a86b5346c02f1bde99f2da8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:54:10.266459    5586 start.go:360] acquireMachinesLock for kindnet-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:10.266493    5586 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "kindnet-344000"
	I0917 10:54:10.266504    5586 start.go:93] Provisioning new machine with config: &{Name:kindnet-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:10.266531    5586 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:10.274325    5586 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:10.291756    5586 start.go:159] libmachine.API.Create for "kindnet-344000" (driver="qemu2")
	I0917 10:54:10.291791    5586 client.go:168] LocalClient.Create starting
	I0917 10:54:10.291857    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:10.291886    5586 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:10.291894    5586 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:10.291935    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:10.291958    5586 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:10.291967    5586 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:10.292319    5586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:10.455835    5586 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:10.524112    5586 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:10.524119    5586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:10.524297    5586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2
	I0917 10:54:10.533401    5586 main.go:141] libmachine: STDOUT: 
	I0917 10:54:10.533419    5586 main.go:141] libmachine: STDERR: 
	I0917 10:54:10.533476    5586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2 +20000M
	I0917 10:54:10.541464    5586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:10.541481    5586 main.go:141] libmachine: STDERR: 
	I0917 10:54:10.541502    5586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2
	I0917 10:54:10.541508    5586 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:10.541522    5586 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:10.541557    5586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:be:07:de:68:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2
	I0917 10:54:10.543179    5586 main.go:141] libmachine: STDOUT: 
	I0917 10:54:10.543193    5586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:10.543214    5586 client.go:171] duration metric: took 251.424541ms to LocalClient.Create
	I0917 10:54:12.545343    5586 start.go:128] duration metric: took 2.278854125s to createHost
	I0917 10:54:12.545422    5586 start.go:83] releasing machines lock for "kindnet-344000", held for 2.278988959s
	W0917 10:54:12.545497    5586 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:12.558949    5586 out.go:177] * Deleting "kindnet-344000" in qemu2 ...
	W0917 10:54:12.590027    5586 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:12.590058    5586 start.go:729] Will try again in 5 seconds ...
	I0917 10:54:17.592113    5586 start.go:360] acquireMachinesLock for kindnet-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:17.592712    5586 start.go:364] duration metric: took 475.041µs to acquireMachinesLock for "kindnet-344000"
	I0917 10:54:17.592815    5586 start.go:93] Provisioning new machine with config: &{Name:kindnet-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:17.593120    5586 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:17.600006    5586 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:17.649066    5586 start.go:159] libmachine.API.Create for "kindnet-344000" (driver="qemu2")
	I0917 10:54:17.649149    5586 client.go:168] LocalClient.Create starting
	I0917 10:54:17.649296    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:17.649391    5586 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:17.649408    5586 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:17.649468    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:17.649514    5586 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:17.649531    5586 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:17.650041    5586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:17.821758    5586 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:17.961818    5586 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:17.961830    5586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:17.962039    5586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2
	I0917 10:54:17.971698    5586 main.go:141] libmachine: STDOUT: 
	I0917 10:54:17.971718    5586 main.go:141] libmachine: STDERR: 
	I0917 10:54:17.971784    5586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2 +20000M
	I0917 10:54:17.979757    5586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:17.979774    5586 main.go:141] libmachine: STDERR: 
	I0917 10:54:17.979795    5586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2
	I0917 10:54:17.979801    5586 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:17.979809    5586 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:17.979834    5586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:bc:c7:4f:81:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kindnet-344000/disk.qcow2
	I0917 10:54:17.981465    5586 main.go:141] libmachine: STDOUT: 
	I0917 10:54:17.981481    5586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:17.981494    5586 client.go:171] duration metric: took 332.350792ms to LocalClient.Create
	I0917 10:54:19.982706    5586 start.go:128] duration metric: took 2.389625833s to createHost
	I0917 10:54:19.982727    5586 start.go:83] releasing machines lock for "kindnet-344000", held for 2.390043125s
	W0917 10:54:19.982841    5586 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:19.991137    5586 out.go:201] 
	W0917 10:54:19.997145    5586 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:54:19.997155    5586 out.go:270] * 
	* 
	W0917 10:54:19.997653    5586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:54:20.004051    5586 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.87s)
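
kindnet never reaches CNI setup; provisioning aborts at the same socket_vmnet dial as the other plugins. For debugging outside the test harness, the sketch below (a hypothetical helper, assuming the binary location out/minikube-darwin-arm64 shown in the (dbg) Run line) replays the exact invocation and surfaces the same exit status:

	// rerun.go - hypothetical replay of the command from net_test.go:112.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Arguments copied verbatim from the failing (dbg) Run line above.
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "kindnet-344000", "--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m", "--cni=kindnet", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// This report records exit status 80 while the daemon is down.
			fmt.Printf("exit status %d\n", ee.ExitCode())
		}
	}

Run it from the repository root so the relative binary path resolves; once the socket_vmnet daemon is healthy, the same command should proceed past VM creation.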

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.788561625s)

-- stdout --
	* [flannel-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-344000" primary control-plane node in "flannel-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:54:22.312637    5699 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:54:22.312766    5699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:22.312768    5699 out.go:358] Setting ErrFile to fd 2...
	I0917 10:54:22.312771    5699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:22.312899    5699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:54:22.314015    5699 out.go:352] Setting JSON to false
	I0917 10:54:22.330357    5699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5025,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:54:22.330429    5699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:54:22.335789    5699 out.go:177] * [flannel-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:54:22.343703    5699 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:54:22.343737    5699 notify.go:220] Checking for updates...
	I0917 10:54:22.350596    5699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:54:22.353624    5699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:54:22.356659    5699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:54:22.359590    5699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:54:22.362635    5699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:54:22.365939    5699 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:54:22.366003    5699 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:54:22.366051    5699 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:54:22.370632    5699 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:54:22.377681    5699 start.go:297] selected driver: qemu2
	I0917 10:54:22.377687    5699 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:54:22.377692    5699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:54:22.379890    5699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:54:22.382676    5699 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:54:22.385748    5699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:54:22.385762    5699 cni.go:84] Creating CNI manager for "flannel"
	I0917 10:54:22.385765    5699 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0917 10:54:22.385797    5699 start.go:340] cluster config:
	{Name:flannel-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:54:22.389267    5699 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:22.404605    5699 out.go:177] * Starting "flannel-344000" primary control-plane node in "flannel-344000" cluster
	I0917 10:54:22.408654    5699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:54:22.408667    5699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:54:22.408673    5699 cache.go:56] Caching tarball of preloaded images
	I0917 10:54:22.408728    5699 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:54:22.408733    5699 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:54:22.408787    5699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/flannel-344000/config.json ...
	I0917 10:54:22.408797    5699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/flannel-344000/config.json: {Name:mk43c8d5ff6d0e0ea68ef7f75d7e84794a4a6e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:54:22.409011    5699 start.go:360] acquireMachinesLock for flannel-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:22.409041    5699 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "flannel-344000"
	I0917 10:54:22.409053    5699 start.go:93] Provisioning new machine with config: &{Name:flannel-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:22.409076    5699 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:22.416678    5699 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:22.431945    5699 start.go:159] libmachine.API.Create for "flannel-344000" (driver="qemu2")
	I0917 10:54:22.431986    5699 client.go:168] LocalClient.Create starting
	I0917 10:54:22.432052    5699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:22.432085    5699 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:22.432099    5699 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:22.432138    5699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:22.432169    5699 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:22.432177    5699 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:22.432526    5699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:22.597104    5699 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:22.686018    5699 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:22.686029    5699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:22.686229    5699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2
	I0917 10:54:22.695562    5699 main.go:141] libmachine: STDOUT: 
	I0917 10:54:22.695580    5699 main.go:141] libmachine: STDERR: 
	I0917 10:54:22.695638    5699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2 +20000M
	I0917 10:54:22.703712    5699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:22.703728    5699 main.go:141] libmachine: STDERR: 
	I0917 10:54:22.703742    5699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2
	I0917 10:54:22.703747    5699 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:22.703765    5699 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:22.703793    5699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:cc:b0:42:8d:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2
	I0917 10:54:22.705437    5699 main.go:141] libmachine: STDOUT: 
	I0917 10:54:22.705451    5699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:22.705474    5699 client.go:171] duration metric: took 273.49ms to LocalClient.Create
	I0917 10:54:24.707617    5699 start.go:128] duration metric: took 2.298587875s to createHost
	I0917 10:54:24.707686    5699 start.go:83] releasing machines lock for "flannel-344000", held for 2.298705584s
	W0917 10:54:24.707763    5699 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:24.717436    5699 out.go:177] * Deleting "flannel-344000" in qemu2 ...
	W0917 10:54:24.750523    5699 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:24.750546    5699 start.go:729] Will try again in 5 seconds ...
	I0917 10:54:29.752566    5699 start.go:360] acquireMachinesLock for flannel-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:29.752881    5699 start.go:364] duration metric: took 267.708µs to acquireMachinesLock for "flannel-344000"
	I0917 10:54:29.752965    5699 start.go:93] Provisioning new machine with config: &{Name:flannel-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:29.753083    5699 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:29.762512    5699 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:29.801122    5699 start.go:159] libmachine.API.Create for "flannel-344000" (driver="qemu2")
	I0917 10:54:29.801166    5699 client.go:168] LocalClient.Create starting
	I0917 10:54:29.801270    5699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:29.801329    5699 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:29.801347    5699 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:29.801408    5699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:29.801452    5699 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:29.801468    5699 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:29.801956    5699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:29.968906    5699 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:30.011536    5699 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:30.011542    5699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:30.011735    5699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2
	I0917 10:54:30.020964    5699 main.go:141] libmachine: STDOUT: 
	I0917 10:54:30.020984    5699 main.go:141] libmachine: STDERR: 
	I0917 10:54:30.021042    5699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2 +20000M
	I0917 10:54:30.028871    5699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:30.028883    5699 main.go:141] libmachine: STDERR: 
	I0917 10:54:30.028898    5699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2
	I0917 10:54:30.028902    5699 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:30.028911    5699 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:30.028941    5699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:20:16:f7:f4:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/flannel-344000/disk.qcow2
	I0917 10:54:30.030505    5699 main.go:141] libmachine: STDOUT: 
	I0917 10:54:30.030517    5699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:30.030530    5699 client.go:171] duration metric: took 229.36725ms to LocalClient.Create
	I0917 10:54:32.032641    5699 start.go:128] duration metric: took 2.279598791s to createHost
	I0917 10:54:32.032701    5699 start.go:83] releasing machines lock for "flannel-344000", held for 2.279874166s
	W0917 10:54:32.033014    5699 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:32.041360    5699 out.go:201] 
	W0917 10:54:32.047584    5699 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:54:32.047626    5699 out.go:270] * 
	* 
	W0917 10:54:32.049470    5699 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:54:32.058364    5699 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
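
Both creation attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never started and the flannel CNI under test is never exercised. The probe below is a minimal Go sketch (not part of the minikube test suite; the socket path is simply the one recorded in the log) that attempts the same unix-socket connection, so a "connection refused" from it reproduces the failure directly on the host.

// probe_socket_vmnet.go — hypothetical host-side diagnostic, assuming only
// the socket path shown in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path minikube passes to socket_vmnet_client
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches the log's `Failed to connect to "/var/run/socket_vmnet"`:
		// no socket_vmnet daemon is listening at that path.
		fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}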

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.907310125s)

-- stdout --
	* [enable-default-cni-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-344000" primary control-plane node in "enable-default-cni-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:54:34.473868    5819 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:54:34.474018    5819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:34.474023    5819 out.go:358] Setting ErrFile to fd 2...
	I0917 10:54:34.474026    5819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:34.474173    5819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:54:34.475326    5819 out.go:352] Setting JSON to false
	I0917 10:54:34.491952    5819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5037,"bootTime":1726590637,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:54:34.492027    5819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:54:34.497898    5819 out.go:177] * [enable-default-cni-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:54:34.506019    5819 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:54:34.506075    5819 notify.go:220] Checking for updates...
	I0917 10:54:34.513987    5819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:54:34.516979    5819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:54:34.519922    5819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:54:34.522972    5819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:54:34.526024    5819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:54:34.527787    5819 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:54:34.527854    5819 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:54:34.527895    5819 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:54:34.533946    5819 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:54:34.541967    5819 start.go:297] selected driver: qemu2
	I0917 10:54:34.541974    5819 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:54:34.541982    5819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:54:34.544190    5819 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:54:34.548022    5819 out.go:177] * Automatically selected the socket_vmnet network
	E0917 10:54:34.549655    5819 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0917 10:54:34.549669    5819 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:54:34.549691    5819 cni.go:84] Creating CNI manager for "bridge"
	I0917 10:54:34.549695    5819 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:54:34.549726    5819 start.go:340] cluster config:
	{Name:enable-default-cni-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:54:34.553100    5819 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:34.560007    5819 out.go:177] * Starting "enable-default-cni-344000" primary control-plane node in "enable-default-cni-344000" cluster
	I0917 10:54:34.563909    5819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:54:34.563923    5819 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:54:34.563930    5819 cache.go:56] Caching tarball of preloaded images
	I0917 10:54:34.563995    5819 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:54:34.564001    5819 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:54:34.564056    5819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/enable-default-cni-344000/config.json ...
	I0917 10:54:34.564065    5819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/enable-default-cni-344000/config.json: {Name:mkf4707044ea10bbbf774f22073b97de8e8e8529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:54:34.564355    5819 start.go:360] acquireMachinesLock for enable-default-cni-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:34.564390    5819 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "enable-default-cni-344000"
	I0917 10:54:34.564399    5819 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:34.564437    5819 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:34.568066    5819 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:34.583350    5819 start.go:159] libmachine.API.Create for "enable-default-cni-344000" (driver="qemu2")
	I0917 10:54:34.583378    5819 client.go:168] LocalClient.Create starting
	I0917 10:54:34.583441    5819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:34.583473    5819 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:34.583482    5819 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:34.583518    5819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:34.583544    5819 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:34.583551    5819 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:34.583902    5819 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:34.746325    5819 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:34.924264    5819 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:34.924273    5819 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:34.924506    5819 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2
	I0917 10:54:34.934242    5819 main.go:141] libmachine: STDOUT: 
	I0917 10:54:34.934267    5819 main.go:141] libmachine: STDERR: 
	I0917 10:54:34.934327    5819 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2 +20000M
	I0917 10:54:34.942953    5819 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:34.942975    5819 main.go:141] libmachine: STDERR: 
	I0917 10:54:34.942998    5819 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2
	I0917 10:54:34.943007    5819 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:34.943021    5819 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:34.943058    5819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:61:a2:b7:e9:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2
	I0917 10:54:34.944873    5819 main.go:141] libmachine: STDOUT: 
	I0917 10:54:34.944888    5819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:34.944909    5819 client.go:171] duration metric: took 361.536416ms to LocalClient.Create
	I0917 10:54:36.947087    5819 start.go:128] duration metric: took 2.382691709s to createHost
	I0917 10:54:36.947180    5819 start.go:83] releasing machines lock for "enable-default-cni-344000", held for 2.382853833s
	W0917 10:54:36.947252    5819 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:36.958774    5819 out.go:177] * Deleting "enable-default-cni-344000" in qemu2 ...
	W0917 10:54:36.993496    5819 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:36.993591    5819 start.go:729] Will try again in 5 seconds ...
	I0917 10:54:41.995624    5819 start.go:360] acquireMachinesLock for enable-default-cni-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:41.996233    5819 start.go:364] duration metric: took 517.125µs to acquireMachinesLock for "enable-default-cni-344000"
	I0917 10:54:41.996383    5819 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:41.996677    5819 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:42.006196    5819 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:42.058736    5819 start.go:159] libmachine.API.Create for "enable-default-cni-344000" (driver="qemu2")
	I0917 10:54:42.058790    5819 client.go:168] LocalClient.Create starting
	I0917 10:54:42.058913    5819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:42.058980    5819 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:42.058998    5819 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:42.059058    5819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:42.059122    5819 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:42.059136    5819 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:42.059646    5819 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:42.235160    5819 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:42.288241    5819 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:42.288247    5819 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:42.288441    5819 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2
	I0917 10:54:42.297706    5819 main.go:141] libmachine: STDOUT: 
	I0917 10:54:42.297724    5819 main.go:141] libmachine: STDERR: 
	I0917 10:54:42.297797    5819 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2 +20000M
	I0917 10:54:42.305761    5819 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:42.305782    5819 main.go:141] libmachine: STDERR: 
	I0917 10:54:42.305796    5819 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2
	I0917 10:54:42.305802    5819 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:42.305810    5819 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:42.305835    5819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:19:5c:14:0f:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/enable-default-cni-344000/disk.qcow2
	I0917 10:54:42.307494    5819 main.go:141] libmachine: STDOUT: 
	I0917 10:54:42.307510    5819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:42.307522    5819 client.go:171] duration metric: took 248.732875ms to LocalClient.Create
	I0917 10:54:44.309665    5819 start.go:128] duration metric: took 2.313021292s to createHost
	I0917 10:54:44.309777    5819 start.go:83] releasing machines lock for "enable-default-cni-344000", held for 2.313589417s
	W0917 10:54:44.310130    5819 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:44.318923    5819 out.go:201] 
	W0917 10:54:44.326080    5819 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:54:44.326111    5819 out.go:270] * 
	* 
	W0917 10:54:44.328787    5819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:54:44.336976    5819 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
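
One detail worth flagging in this run: the line "E0917 10:54:34.549655 ... Found deprecated --enable-default-cni flag, setting --cni=bridge" (start_flags.go:464) shows the legacy flag being rewritten to the bridge CNI before the cluster config is generated, which is why the config dump above ends up with CNI:bridge and NetworkPlugin:cni even though the test passed --enable-default-cni=true. A rough sketch of that observed translation follows; the helper name and the "auto" fallback are illustrative assumptions, not minikube's actual start_flags.go code.

package main

import "fmt"

// resolveCNI mirrors the translation observed in the log: an explicit
// --cni value wins, and the deprecated --enable-default-cni flag is
// mapped to the bridge CNI. Hypothetical helper, not minikube source.
func resolveCNI(enableDefaultCNI bool, cniFlag string) string {
	if cniFlag != "" {
		return cniFlag // explicit --cni=..., as in the bridge test below
	}
	if enableDefaultCNI {
		return "bridge" // deprecated flag maps to the bridge CNI
	}
	return "auto" // assumed fallback when neither flag is set
}

func main() {
	fmt.Println(resolveCNI(true, ""))        // prints "bridge"
	fmt.Println(resolveCNI(false, "bridge")) // prints "bridge"
}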

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.881066042s)

-- stdout --
	* [bridge-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-344000" primary control-plane node in "bridge-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:54:46.558103    5931 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:54:46.558337    5931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:46.558341    5931 out.go:358] Setting ErrFile to fd 2...
	I0917 10:54:46.558344    5931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:46.558472    5931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:54:46.559815    5931 out.go:352] Setting JSON to false
	I0917 10:54:46.576293    5931 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5049,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:54:46.576364    5931 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:54:46.581179    5931 out.go:177] * [bridge-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:54:46.588988    5931 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:54:46.589034    5931 notify.go:220] Checking for updates...
	I0917 10:54:46.598974    5931 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:54:46.602008    5931 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:54:46.605019    5931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:54:46.608024    5931 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:54:46.610930    5931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:54:46.614284    5931 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:54:46.614348    5931 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:54:46.614397    5931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:54:46.616982    5931 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:54:46.623951    5931 start.go:297] selected driver: qemu2
	I0917 10:54:46.623957    5931 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:54:46.623962    5931 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:54:46.626072    5931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:54:46.627335    5931 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:54:46.630050    5931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:54:46.630072    5931 cni.go:84] Creating CNI manager for "bridge"
	I0917 10:54:46.630076    5931 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:54:46.630108    5931 start.go:340] cluster config:
	{Name:bridge-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:54:46.633516    5931 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:46.640948    5931 out.go:177] * Starting "bridge-344000" primary control-plane node in "bridge-344000" cluster
	I0917 10:54:46.644988    5931 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:54:46.645005    5931 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:54:46.645014    5931 cache.go:56] Caching tarball of preloaded images
	I0917 10:54:46.645085    5931 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:54:46.645090    5931 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:54:46.645147    5931 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/bridge-344000/config.json ...
	I0917 10:54:46.645162    5931 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/bridge-344000/config.json: {Name:mk328aca92c63e93e0a9ec850244cf14819d59b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:54:46.645355    5931 start.go:360] acquireMachinesLock for bridge-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:46.645384    5931 start.go:364] duration metric: took 24µs to acquireMachinesLock for "bridge-344000"
	I0917 10:54:46.645393    5931 start.go:93] Provisioning new machine with config: &{Name:bridge-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:46.645418    5931 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:46.652983    5931 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:46.668474    5931 start.go:159] libmachine.API.Create for "bridge-344000" (driver="qemu2")
	I0917 10:54:46.668506    5931 client.go:168] LocalClient.Create starting
	I0917 10:54:46.668574    5931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:46.668608    5931 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:46.668616    5931 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:46.668657    5931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:46.668683    5931 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:46.668693    5931 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:46.669046    5931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:46.831566    5931 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:46.863565    5931 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:46.863571    5931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:46.863759    5931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2
	I0917 10:54:46.872920    5931 main.go:141] libmachine: STDOUT: 
	I0917 10:54:46.872937    5931 main.go:141] libmachine: STDERR: 
	I0917 10:54:46.873003    5931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2 +20000M
	I0917 10:54:46.881057    5931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:46.881072    5931 main.go:141] libmachine: STDERR: 
	I0917 10:54:46.881086    5931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2
	I0917 10:54:46.881100    5931 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:46.881114    5931 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:46.881149    5931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:e0:a7:20:f3:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2
	I0917 10:54:46.882820    5931 main.go:141] libmachine: STDOUT: 
	I0917 10:54:46.882836    5931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:46.882856    5931 client.go:171] duration metric: took 214.351917ms to LocalClient.Create
	I0917 10:54:48.885023    5931 start.go:128] duration metric: took 2.239637375s to createHost
	I0917 10:54:48.885116    5931 start.go:83] releasing machines lock for "bridge-344000", held for 2.239791916s
	W0917 10:54:48.885224    5931 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:48.895382    5931 out.go:177] * Deleting "bridge-344000" in qemu2 ...
	W0917 10:54:48.928830    5931 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:48.928853    5931 start.go:729] Will try again in 5 seconds ...
	I0917 10:54:53.930967    5931 start.go:360] acquireMachinesLock for bridge-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:53.931607    5931 start.go:364] duration metric: took 501.916µs to acquireMachinesLock for "bridge-344000"
	I0917 10:54:53.931690    5931 start.go:93] Provisioning new machine with config: &{Name:bridge-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:53.932005    5931 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:53.937630    5931 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:53.986689    5931 start.go:159] libmachine.API.Create for "bridge-344000" (driver="qemu2")
	I0917 10:54:53.986739    5931 client.go:168] LocalClient.Create starting
	I0917 10:54:53.986873    5931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:53.986951    5931 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:53.986978    5931 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:53.987053    5931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:53.987105    5931 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:53.987117    5931 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:53.987874    5931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:54.160839    5931 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:54.350787    5931 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:54.350797    5931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:54.351012    5931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2
	I0917 10:54:54.361045    5931 main.go:141] libmachine: STDOUT: 
	I0917 10:54:54.361072    5931 main.go:141] libmachine: STDERR: 
	I0917 10:54:54.361136    5931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2 +20000M
	I0917 10:54:54.369571    5931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:54.369596    5931 main.go:141] libmachine: STDERR: 
	I0917 10:54:54.369609    5931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2
	I0917 10:54:54.369615    5931 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:54.369624    5931 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:54.369661    5931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:44:7c:78:c1:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/bridge-344000/disk.qcow2
	I0917 10:54:54.371334    5931 main.go:141] libmachine: STDOUT: 
	I0917 10:54:54.371351    5931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:54.371366    5931 client.go:171] duration metric: took 384.632458ms to LocalClient.Create
	I0917 10:54:56.373415    5931 start.go:128] duration metric: took 2.441458667s to createHost
	I0917 10:54:56.373443    5931 start.go:83] releasing machines lock for "bridge-344000", held for 2.441885208s
	W0917 10:54:56.373605    5931 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:54:56.382867    5931 out.go:201] 
	W0917 10:54:56.389949    5931 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:54:56.389956    5931 out.go:270] * 
	* 
	W0917 10:54:56.390703    5931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:54:56.401918    5931 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
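Every failure in this group traces to the same host-side condition: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU never receives its network file descriptor (the fd=3 in the -netdev argument) and provisioning aborts. A minimal way to confirm the daemon is down, as a sketch in Go (the socket path comes from the log lines above; everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing command lines above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the test condition: the
		// socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}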

TestNetworkPlugins/group/kubenet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-344000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.782903417s)

-- stdout --
	* [kubenet-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-344000" primary control-plane node in "kubenet-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:54:58.567271    6040 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:54:58.567397    6040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:58.567400    6040 out.go:358] Setting ErrFile to fd 2...
	I0917 10:54:58.567407    6040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:58.567546    6040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:54:58.568710    6040 out.go:352] Setting JSON to false
	I0917 10:54:58.584946    6040 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5061,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:54:58.585036    6040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:54:58.590745    6040 out.go:177] * [kubenet-344000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:54:58.597594    6040 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:54:58.597693    6040 notify.go:220] Checking for updates...
	I0917 10:54:58.604579    6040 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:54:58.607592    6040 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:54:58.610558    6040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:54:58.613559    6040 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:54:58.616580    6040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:54:58.619905    6040 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:54:58.619969    6040 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:54:58.620022    6040 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:54:58.622493    6040 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:54:58.629637    6040 start.go:297] selected driver: qemu2
	I0917 10:54:58.629646    6040 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:54:58.629654    6040 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:54:58.631762    6040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:54:58.633316    6040 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:54:58.636600    6040 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:54:58.636617    6040 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0917 10:54:58.636643    6040 start.go:340] cluster config:
	{Name:kubenet-344000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:54:58.640075    6040 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:58.647564    6040 out.go:177] * Starting "kubenet-344000" primary control-plane node in "kubenet-344000" cluster
	I0917 10:54:58.651605    6040 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:54:58.651620    6040 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:54:58.651628    6040 cache.go:56] Caching tarball of preloaded images
	I0917 10:54:58.651693    6040 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:54:58.651698    6040 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:54:58.651759    6040 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kubenet-344000/config.json ...
	I0917 10:54:58.651771    6040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/kubenet-344000/config.json: {Name:mkd88a55d2620194dccd9f59e188e7da9f996449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:54:58.651984    6040 start.go:360] acquireMachinesLock for kubenet-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:58.652014    6040 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "kubenet-344000"
	I0917 10:54:58.652024    6040 start.go:93] Provisioning new machine with config: &{Name:kubenet-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:58.652046    6040 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:54:58.659551    6040 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:54:58.674681    6040 start.go:159] libmachine.API.Create for "kubenet-344000" (driver="qemu2")
	I0917 10:54:58.674714    6040 client.go:168] LocalClient.Create starting
	I0917 10:54:58.674798    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:54:58.674833    6040 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:58.674843    6040 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:58.674882    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:54:58.674908    6040 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:58.674917    6040 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:58.675259    6040 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:54:58.841086    6040 main.go:141] libmachine: Creating SSH key...
	I0917 10:54:58.875411    6040 main.go:141] libmachine: Creating Disk image...
	I0917 10:54:58.875420    6040 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:54:58.875611    6040 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2
	I0917 10:54:58.884939    6040 main.go:141] libmachine: STDOUT: 
	I0917 10:54:58.884955    6040 main.go:141] libmachine: STDERR: 
	I0917 10:54:58.885034    6040 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2 +20000M
	I0917 10:54:58.893365    6040 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:54:58.893382    6040 main.go:141] libmachine: STDERR: 
	I0917 10:54:58.893396    6040 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2
	I0917 10:54:58.893403    6040 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:54:58.893414    6040 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:54:58.893441    6040 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a4:e5:29:a3:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2
	I0917 10:54:58.895074    6040 main.go:141] libmachine: STDOUT: 
	I0917 10:54:58.895096    6040 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:54:58.895117    6040 client.go:171] duration metric: took 220.402708ms to LocalClient.Create
	I0917 10:55:00.897275    6040 start.go:128] duration metric: took 2.245275459s to createHost
	I0917 10:55:00.897355    6040 start.go:83] releasing machines lock for "kubenet-344000", held for 2.245402958s
	W0917 10:55:00.897398    6040 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:00.911499    6040 out.go:177] * Deleting "kubenet-344000" in qemu2 ...
	W0917 10:55:00.940859    6040 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:00.940893    6040 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:05.942858    6040 start.go:360] acquireMachinesLock for kubenet-344000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:05.943412    6040 start.go:364] duration metric: took 441.709µs to acquireMachinesLock for "kubenet-344000"
	I0917 10:55:05.943578    6040 start.go:93] Provisioning new machine with config: &{Name:kubenet-344000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-344000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:05.943895    6040 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:05.949714    6040 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 10:55:06.000167    6040 start.go:159] libmachine.API.Create for "kubenet-344000" (driver="qemu2")
	I0917 10:55:06.000229    6040 client.go:168] LocalClient.Create starting
	I0917 10:55:06.000366    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:06.000432    6040 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:06.000449    6040 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:06.000528    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:06.000577    6040 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:06.000588    6040 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:06.001130    6040 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:06.177912    6040 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:06.262907    6040 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:06.262913    6040 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:06.263130    6040 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2
	I0917 10:55:06.272573    6040 main.go:141] libmachine: STDOUT: 
	I0917 10:55:06.272592    6040 main.go:141] libmachine: STDERR: 
	I0917 10:55:06.272664    6040 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2 +20000M
	I0917 10:55:06.280692    6040 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:06.280718    6040 main.go:141] libmachine: STDERR: 
	I0917 10:55:06.280735    6040 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2
	I0917 10:55:06.280747    6040 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:06.280757    6040 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:06.280787    6040 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:54:8a:3b:2f:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/kubenet-344000/disk.qcow2
	I0917 10:55:06.282556    6040 main.go:141] libmachine: STDOUT: 
	I0917 10:55:06.282571    6040 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:06.282585    6040 client.go:171] duration metric: took 282.3595ms to LocalClient.Create
	I0917 10:55:08.284715    6040 start.go:128] duration metric: took 2.34086325s to createHost
	I0917 10:55:08.284779    6040 start.go:83] releasing machines lock for "kubenet-344000", held for 2.341409625s
	W0917 10:55:08.285088    6040 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:08.295769    6040 out.go:201] 
	W0917 10:55:08.299919    6040 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:08.299937    6040 out.go:270] * 
	* 
	W0917 10:55:08.301611    6040 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:08.308872    6040 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.78s)
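The stdout above also shows the recovery path minikube takes before giving up: the first createHost fails, the half-created profile is deleted, it waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with status 80 (the GUEST_PROVISION reason). A generic sketch of that control flow, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the real provisioning step; in the logs this is
// where socket_vmnet_client fails with "Connection refused".
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
	if err := createHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the exit status net_test.go asserts against
	}
}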

TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-842000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-842000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.904995s)

-- stdout --
	* [old-k8s-version-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-842000" primary control-plane node in "old-k8s-version-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:55:10.527414    6153 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:10.527600    6153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:10.527604    6153 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:10.527606    6153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:10.527739    6153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:10.528935    6153 out.go:352] Setting JSON to false
	I0917 10:55:10.545203    6153 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5073,"bootTime":1726590637,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:10.545280    6153 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:10.550909    6153 out.go:177] * [old-k8s-version-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:10.558870    6153 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:10.558912    6153 notify.go:220] Checking for updates...
	I0917 10:55:10.565803    6153 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:10.568851    6153 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:10.571865    6153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:10.576786    6153 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:10.580567    6153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:10.585179    6153 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:10.585257    6153 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:55:10.585307    6153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:10.591904    6153 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:55:10.603798    6153 start.go:297] selected driver: qemu2
	I0917 10:55:10.603805    6153 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:55:10.603820    6153 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:10.606147    6153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:55:10.610826    6153 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:55:10.613951    6153 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:10.613972    6153 cni.go:84] Creating CNI manager for ""
	I0917 10:55:10.613997    6153 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 10:55:10.614032    6153 start.go:340] cluster config:
	{Name:old-k8s-version-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:10.617789    6153 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:10.625826    6153 out.go:177] * Starting "old-k8s-version-842000" primary control-plane node in "old-k8s-version-842000" cluster
	I0917 10:55:10.629719    6153 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 10:55:10.629733    6153 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 10:55:10.629742    6153 cache.go:56] Caching tarball of preloaded images
	I0917 10:55:10.629805    6153 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:55:10.629810    6153 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 10:55:10.629869    6153 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/old-k8s-version-842000/config.json ...
	I0917 10:55:10.629880    6153 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/old-k8s-version-842000/config.json: {Name:mk34ef33cdf9b49ddfb6bef6edf3077618e4562f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:55:10.630103    6153 start.go:360] acquireMachinesLock for old-k8s-version-842000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:10.630139    6153 start.go:364] duration metric: took 28.541µs to acquireMachinesLock for "old-k8s-version-842000"
	I0917 10:55:10.630149    6153 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:10.630178    6153 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:10.637709    6153 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:10.654188    6153 start.go:159] libmachine.API.Create for "old-k8s-version-842000" (driver="qemu2")
	I0917 10:55:10.654220    6153 client.go:168] LocalClient.Create starting
	I0917 10:55:10.654290    6153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:10.654321    6153 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:10.654329    6153 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:10.654372    6153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:10.654399    6153 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:10.654407    6153 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:10.654847    6153 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:10.821397    6153 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:10.975405    6153 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:10.975414    6153 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:10.975641    6153 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:10.985197    6153 main.go:141] libmachine: STDOUT: 
	I0917 10:55:10.985217    6153 main.go:141] libmachine: STDERR: 
	I0917 10:55:10.985291    6153 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2 +20000M
	I0917 10:55:10.993459    6153 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:10.993474    6153 main.go:141] libmachine: STDERR: 
	I0917 10:55:10.993492    6153 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:10.993498    6153 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:10.993509    6153 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:10.993537    6153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a3:70:5e:53:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:10.995217    6153 main.go:141] libmachine: STDOUT: 
	I0917 10:55:10.995231    6153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:10.995249    6153 client.go:171] duration metric: took 341.034625ms to LocalClient.Create
	I0917 10:55:12.997372    6153 start.go:128] duration metric: took 2.367241125s to createHost
	I0917 10:55:12.997460    6153 start.go:83] releasing machines lock for "old-k8s-version-842000", held for 2.367384542s
	W0917 10:55:12.997508    6153 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:13.008844    6153 out.go:177] * Deleting "old-k8s-version-842000" in qemu2 ...
	W0917 10:55:13.044374    6153 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:13.044397    6153 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:18.046428    6153 start.go:360] acquireMachinesLock for old-k8s-version-842000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:18.047148    6153 start.go:364] duration metric: took 575.667µs to acquireMachinesLock for "old-k8s-version-842000"
	I0917 10:55:18.047322    6153 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:18.047652    6153 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:18.053511    6153 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:18.103267    6153 start.go:159] libmachine.API.Create for "old-k8s-version-842000" (driver="qemu2")
	I0917 10:55:18.103325    6153 client.go:168] LocalClient.Create starting
	I0917 10:55:18.103459    6153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:18.103521    6153 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:18.103540    6153 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:18.103596    6153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:18.103639    6153 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:18.103651    6153 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:18.104368    6153 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:18.273818    6153 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:18.347444    6153 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:18.347449    6153 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:18.347640    6153 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:18.356844    6153 main.go:141] libmachine: STDOUT: 
	I0917 10:55:18.356863    6153 main.go:141] libmachine: STDERR: 
	I0917 10:55:18.356915    6153 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2 +20000M
	I0917 10:55:18.364828    6153 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:18.364842    6153 main.go:141] libmachine: STDERR: 
	I0917 10:55:18.364853    6153 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:18.364858    6153 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:18.364865    6153 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:18.364896    6153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a8:30:5a:64:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:18.366490    6153 main.go:141] libmachine: STDOUT: 
	I0917 10:55:18.366506    6153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:18.366520    6153 client.go:171] duration metric: took 263.197875ms to LocalClient.Create
	I0917 10:55:20.368591    6153 start.go:128] duration metric: took 2.32098675s to createHost
	I0917 10:55:20.368661    6153 start.go:83] releasing machines lock for "old-k8s-version-842000", held for 2.321538333s
	W0917 10:55:20.368918    6153 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:20.376311    6153 out.go:201] 
	W0917 10:55:20.384346    6153 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:20.384365    6153 out.go:270] * 
	* 
	W0917 10:55:20.385395    6153 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:20.393292    6153 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-842000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (38.594792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)
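From here the remaining old-k8s-version serial steps fail mechanically rather than independently: FirstStart exited before any VM existed, so minikube never wrote an "old-k8s-version-842000" context into the kubeconfig, and every later "kubectl --context old-k8s-version-842000" invocation below dies on the lookup. A sketch of that check (kubeconfig path taken from the KUBECONFIG line in the logs; requires k8s.io/client-go):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG value printed in the logs above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19662-1312/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["old-k8s-version-842000"]; !ok {
		// This is the condition behind each "context does not exist" error below.
		fmt.Println(`context "old-k8s-version-842000" was never created`)
	}
}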

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-842000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-842000 create -f testdata/busybox.yaml: exit status 1 (27.3005ms)

** stderr ** 
	error: context "old-k8s-version-842000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-842000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (29.773542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (29.382666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
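
From here on the failure mode shifts: because FirstStart never created the cluster, the profile has no kubeconfig entry, and every kubectl call in the group fails with context "old-k8s-version-842000" does not exist. That can be confirmed from the same workspace with a standard kubectl call (the KUBECONFIG path is the one printed in the start output above):

	KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig kubectl config get-contexts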

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-842000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-842000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-842000 describe deploy/metrics-server -n kube-system: exit status 1 (28.090417ms)

** stderr **
	error: context "old-k8s-version-842000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-842000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (30.434875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-842000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-842000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.195449625s)

-- stdout --
	* [old-k8s-version-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-842000" primary control-plane node in "old-k8s-version-842000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:55:24.191472    6205 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:24.191606    6205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:24.191610    6205 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:24.191612    6205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:24.191737    6205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:24.192723    6205 out.go:352] Setting JSON to false
	I0917 10:55:24.210506    6205 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5087,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:24.210582    6205 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:24.214789    6205 out.go:177] * [old-k8s-version-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:24.221737    6205 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:24.221824    6205 notify.go:220] Checking for updates...
	I0917 10:55:24.229679    6205 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:24.232714    6205 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:24.235687    6205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:24.238636    6205 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:24.241700    6205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:24.247137    6205 config.go:182] Loaded profile config "old-k8s-version-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0917 10:55:24.250635    6205 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 10:55:24.253673    6205 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:24.257703    6205 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:55:24.264654    6205 start.go:297] selected driver: qemu2
	I0917 10:55:24.264660    6205 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:24.264703    6205 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:24.266966    6205 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:24.266988    6205 cni.go:84] Creating CNI manager for ""
	I0917 10:55:24.267012    6205 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 10:55:24.267044    6205 start.go:340] cluster config:
	{Name:old-k8s-version-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-842000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:24.270357    6205 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:24.275703    6205 out.go:177] * Starting "old-k8s-version-842000" primary control-plane node in "old-k8s-version-842000" cluster
	I0917 10:55:24.279766    6205 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 10:55:24.279789    6205 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 10:55:24.279796    6205 cache.go:56] Caching tarball of preloaded images
	I0917 10:55:24.279865    6205 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:55:24.279871    6205 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 10:55:24.279941    6205 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/old-k8s-version-842000/config.json ...
	I0917 10:55:24.280436    6205 start.go:360] acquireMachinesLock for old-k8s-version-842000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:24.280464    6205 start.go:364] duration metric: took 21.5µs to acquireMachinesLock for "old-k8s-version-842000"
	I0917 10:55:24.280472    6205 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:55:24.280479    6205 fix.go:54] fixHost starting: 
	I0917 10:55:24.280585    6205 fix.go:112] recreateIfNeeded on old-k8s-version-842000: state=Stopped err=<nil>
	W0917 10:55:24.280593    6205 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:55:24.284661    6205 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-842000" ...
	I0917 10:55:24.292654    6205 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:24.292682    6205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a8:30:5a:64:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:24.294470    6205 main.go:141] libmachine: STDOUT: 
	I0917 10:55:24.294487    6205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:24.294515    6205 fix.go:56] duration metric: took 14.03725ms for fixHost
	I0917 10:55:24.294520    6205 start.go:83] releasing machines lock for "old-k8s-version-842000", held for 14.052542ms
	W0917 10:55:24.294525    6205 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:24.294565    6205 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:24.294569    6205 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:29.296021    6205 start.go:360] acquireMachinesLock for old-k8s-version-842000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:29.296520    6205 start.go:364] duration metric: took 407µs to acquireMachinesLock for "old-k8s-version-842000"
	I0917 10:55:29.296701    6205 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:55:29.296721    6205 fix.go:54] fixHost starting: 
	I0917 10:55:29.297498    6205 fix.go:112] recreateIfNeeded on old-k8s-version-842000: state=Stopped err=<nil>
	W0917 10:55:29.297525    6205 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:55:29.306883    6205 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-842000" ...
	I0917 10:55:29.309928    6205 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:29.310185    6205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a8:30:5a:64:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/old-k8s-version-842000/disk.qcow2
	I0917 10:55:29.320096    6205 main.go:141] libmachine: STDOUT: 
	I0917 10:55:29.320183    6205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:29.320297    6205 fix.go:56] duration metric: took 23.576708ms for fixHost
	I0917 10:55:29.320323    6205 start.go:83] releasing machines lock for "old-k8s-version-842000", held for 23.782666ms
	W0917 10:55:29.320536    6205 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:29.327978    6205 out.go:201] 
	W0917 10:55:29.332007    6205 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:29.332037    6205 out.go:270] * 
	W0917 10:55:29.334455    6205 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:29.341924    6205 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-842000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (62.537708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
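
The second start takes the restart path (fix.go reuses the existing machine) but dies on the identical socket_vmnet refusal, so it again exits 80 with a GUEST_PROVISION error. The recovery the log itself proposes only helps once the daemon is reachable; roughly:

	out/minikube-darwin-arm64 delete -p old-k8s-version-842000
	out/minikube-darwin-arm64 start -p old-k8s-version-842000 --driver=qemu2 --kubernetes-version=v1.20.0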

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-842000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (31.63725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-842000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.057833ms)

** stderr **
	error: context "old-k8s-version-842000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (29.506958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-842000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
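The (-want +got) block is a go-cmp style diff: each "-" entry is an image the test expects to find cached for v1.20.0, and the got side is empty because no VM ever started to load them. After a successful start they can be checked by hand with the same command the test wraps:

	out/minikube-darwin-arm64 -p old-k8s-version-842000 image list --format=json
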
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (28.929584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-842000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-842000 --alsologtostderr -v=1: exit status 83 (42.78625ms)

-- stdout --
	* The control-plane node old-k8s-version-842000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-842000"

-- /stdout --
** stderr ** 
	I0917 10:55:29.604799    6225 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:29.605867    6225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:29.605871    6225 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:29.605873    6225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:29.606053    6225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:29.606281    6225 out.go:352] Setting JSON to false
	I0917 10:55:29.606289    6225 mustload.go:65] Loading cluster: old-k8s-version-842000
	I0917 10:55:29.606512    6225 config.go:182] Loaded profile config "old-k8s-version-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0917 10:55:29.611131    6225 out.go:177] * The control-plane node old-k8s-version-842000 host is not running: state=Stopped
	I0917 10:55:29.614051    6225 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-842000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-842000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (30.072916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (29.909291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
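
Unlike the start failures, pause fails fast (exit status 83) because it merely inspects the profile, finds the host Stopped, and prints the remediation shown in its stdout; with nothing running there is nothing to pause. The suggested follow-up is the same start command this whole group is blocked on:

	out/minikube-darwin-arm64 start -p old-k8s-version-842000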

TestStartStop/group/no-preload/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.920942416s)

-- stdout --
	* [no-preload-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-761000" primary control-plane node in "no-preload-761000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-761000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:55:29.925429    6242 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:29.925546    6242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:29.925549    6242 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:29.925552    6242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:29.925672    6242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:29.926739    6242 out.go:352] Setting JSON to false
	I0917 10:55:29.942863    6242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5092,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:29.942950    6242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:29.946813    6242 out.go:177] * [no-preload-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:29.955030    6242 notify.go:220] Checking for updates...
	I0917 10:55:29.958840    6242 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:29.961857    6242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:29.964904    6242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:29.967882    6242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:29.970897    6242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:29.973866    6242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:29.977271    6242 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:29.977327    6242 config.go:182] Loaded profile config "stopped-upgrade-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 10:55:29.977372    6242 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:29.981864    6242 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:55:29.988841    6242 start.go:297] selected driver: qemu2
	I0917 10:55:29.988847    6242 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:55:29.988854    6242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:29.990953    6242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:55:29.994889    6242 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:55:29.997902    6242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:29.997915    6242 cni.go:84] Creating CNI manager for ""
	I0917 10:55:29.997935    6242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:55:29.997941    6242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:55:29.997969    6242 start.go:340] cluster config:
	{Name:no-preload-761000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:30.001361    6242 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.008826    6242 out.go:177] * Starting "no-preload-761000" primary control-plane node in "no-preload-761000" cluster
	I0917 10:55:30.011876    6242 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:55:30.011935    6242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/no-preload-761000/config.json ...
	I0917 10:55:30.011949    6242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/no-preload-761000/config.json: {Name:mkdb6d2e29bfe75a907c123e15b5c2f88cc27604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:55:30.011956    6242 cache.go:107] acquiring lock: {Name:mk931da1dbbf2c2e59821581c317dc5df31663b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.011972    6242 cache.go:107] acquiring lock: {Name:mkbe6e2b17dfb0cf5b9b41a0cfe98e86ee312744 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012078    6242 cache.go:107] acquiring lock: {Name:mkabc0356933cf4b0130508599e71d338897b871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012093    6242 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 10:55:30.012113    6242 cache.go:107] acquiring lock: {Name:mka3edc194783e254b182145fadcbb403553614e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012136    6242 cache.go:107] acquiring lock: {Name:mkdc12a93d9deba88b8d1060e8a60dfdaeded8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012176    6242 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 10:55:30.012182    6242 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.5µs
	I0917 10:55:30.012188    6242 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 10:55:30.012106    6242 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 10:55:30.012206    6242 cache.go:107] acquiring lock: {Name:mkb6116f7da9fae1343c5ec2f15ca27329260db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012208    6242 cache.go:107] acquiring lock: {Name:mk34c6c5d626fa8edbca8ddc03b13aad0f91c621 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012223    6242 cache.go:107] acquiring lock: {Name:mk0343b88250f2aa3071676e25d03257a382cb49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:30.012314    6242 start.go:360] acquireMachinesLock for no-preload-761000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:30.012334    6242 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 10:55:30.012347    6242 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "no-preload-761000"
	I0917 10:55:30.012359    6242 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 10:55:30.012350    6242 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 10:55:30.012357    6242 start.go:93] Provisioning new machine with config: &{Name:no-preload-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:30.012398    6242 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:30.012418    6242 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 10:55:30.012437    6242 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 10:55:30.016888    6242 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:30.025624    6242 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 10:55:30.026283    6242 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 10:55:30.028251    6242 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 10:55:30.028290    6242 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 10:55:30.028438    6242 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 10:55:30.028478    6242 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 10:55:30.028625    6242 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 10:55:30.033329    6242 start.go:159] libmachine.API.Create for "no-preload-761000" (driver="qemu2")
	I0917 10:55:30.033353    6242 client.go:168] LocalClient.Create starting
	I0917 10:55:30.033428    6242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:30.033460    6242 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:30.033469    6242 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:30.033516    6242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:30.033541    6242 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:30.033554    6242 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:30.033940    6242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:30.201943    6242 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:30.279485    6242 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:30.279516    6242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:30.279704    6242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:30.289521    6242 main.go:141] libmachine: STDOUT: 
	I0917 10:55:30.289789    6242 main.go:141] libmachine: STDERR: 
	I0917 10:55:30.289866    6242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2 +20000M
	I0917 10:55:30.298965    6242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:30.299028    6242 main.go:141] libmachine: STDERR: 
	I0917 10:55:30.299051    6242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:30.299056    6242 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:30.299093    6242 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:30.299119    6242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:2b:09:7c:0d:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:30.301013    6242 main.go:141] libmachine: STDOUT: 
	I0917 10:55:30.301033    6242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:30.301055    6242 client.go:171] duration metric: took 267.703709ms to LocalClient.Create
	I0917 10:55:30.435020    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0917 10:55:30.443990    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 10:55:30.445222    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 10:55:30.459131    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 10:55:30.475418    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 10:55:30.511002    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0917 10:55:30.562981    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0917 10:55:30.562995    6242 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 550.836375ms
	I0917 10:55:30.563003    6242 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0917 10:55:30.569324    6242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 10:55:32.301123    6242 start.go:128] duration metric: took 2.288774083s to createHost
	I0917 10:55:32.301171    6242 start.go:83] releasing machines lock for "no-preload-761000", held for 2.288889542s
	W0917 10:55:32.301191    6242 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:32.314161    6242 out.go:177] * Deleting "no-preload-761000" in qemu2 ...
	W0917 10:55:32.334752    6242 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:32.334764    6242 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:32.858656    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0917 10:55:32.858684    6242 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.846634958s
	I0917 10:55:32.858697    6242 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0917 10:55:33.613933    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0917 10:55:33.613970    6242 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 3.601965958s
	I0917 10:55:33.613988    6242 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0917 10:55:34.034231    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0917 10:55:34.034264    6242 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.022186416s
	I0917 10:55:34.034289    6242 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0917 10:55:34.198848    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0917 10:55:34.198892    6242 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.187071708s
	I0917 10:55:34.198909    6242 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0917 10:55:34.406732    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0917 10:55:34.406775    6242 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.39495425s
	I0917 10:55:34.406817    6242 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0917 10:55:37.336401    6242 start.go:360] acquireMachinesLock for no-preload-761000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:37.336741    6242 start.go:364] duration metric: took 276.417µs to acquireMachinesLock for "no-preload-761000"
	I0917 10:55:37.336840    6242 start.go:93] Provisioning new machine with config: &{Name:no-preload-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:37.337033    6242 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:37.348169    6242 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:37.393135    6242 start.go:159] libmachine.API.Create for "no-preload-761000" (driver="qemu2")
	I0917 10:55:37.393194    6242 client.go:168] LocalClient.Create starting
	I0917 10:55:37.393331    6242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:37.393395    6242 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:37.393417    6242 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:37.393482    6242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:37.393535    6242 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:37.393547    6242 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:37.394042    6242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:37.564040    6242 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:37.763047    6242 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:37.763060    6242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:37.763319    6242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:37.773537    6242 main.go:141] libmachine: STDOUT: 
	I0917 10:55:37.773569    6242 main.go:141] libmachine: STDERR: 
	I0917 10:55:37.773640    6242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2 +20000M
	I0917 10:55:37.782181    6242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:37.782197    6242 main.go:141] libmachine: STDERR: 
	I0917 10:55:37.782209    6242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:37.782214    6242 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:37.782223    6242 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:37.782259    6242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:43:a5:1b:31:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:37.784059    6242 main.go:141] libmachine: STDOUT: 
	I0917 10:55:37.784073    6242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:37.784085    6242 client.go:171] duration metric: took 390.899083ms to LocalClient.Create
	I0917 10:55:37.969988    6242 cache.go:157] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0917 10:55:37.970007    6242 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.958179s
	I0917 10:55:37.970016    6242 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0917 10:55:37.970042    6242 cache.go:87] Successfully saved all images to host disk.
	I0917 10:55:39.784206    6242 start.go:128] duration metric: took 2.447228709s to createHost
	I0917 10:55:39.784273    6242 start.go:83] releasing machines lock for "no-preload-761000", held for 2.447564291s
	W0917 10:55:39.784608    6242 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:39.788533    6242 out.go:201] 
	W0917 10:55:39.795551    6242 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:39.795586    6242 out.go:270] * 
	W0917 10:55:39.796807    6242 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:39.809320    6242 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (45.467166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.97s)
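Every qemu2 start failure in this run reduces to the same precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and host creation aborts with "Connection refused". A minimal standalone probe for that precondition might look like the Go sketch below; it is not part of the test suite, and the socket path is simply the SocketVMnetPath value from the config dump above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the machine config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the state these logs show: no daemon on the socket,
		// so every libmachine create attempt fails identically.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run before the suite, a probe like this would distinguish an environment problem (socket_vmnet daemon not started on the Jenkins host) from a genuine minikube regression.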

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-761000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-761000 create -f testdata/busybox.yaml: exit status 1 (29.101834ms)

** stderr ** 
	error: context "no-preload-761000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-761000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (33.081375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (33.681ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
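This failure is purely downstream: FirstStart exited with status 80 before provisioning, so no kubeconfig context named no-preload-761000 was ever written, and kubectl rejects the --context flag. A hedged sketch of that check in Go, assuming k8s.io/client-go is on the module path (the kubeconfig location mirrors the KUBECONFIG shown in the logs):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the test run points at via KUBECONFIG.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19662-1312/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// If the profile's context is missing, every kubectl --context call
	// will fail exactly as above: `context "no-preload-761000" does not exist`.
	if _, ok := cfg.Contexts["no-preload-761000"]; !ok {
		fmt.Println("no context for profile; first start never reached kubeconfig setup")
	}
}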

TestStartStop/group/embed-certs/serial/FirstStart (10.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-238000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-238000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.086265333s)

-- stdout --
	* [embed-certs-238000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-238000" primary control-plane node in "embed-certs-238000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-238000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:55:39.966433    6293 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:39.966595    6293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:39.966598    6293 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:39.966601    6293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:39.966740    6293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:39.968177    6293 out.go:352] Setting JSON to false
	I0917 10:55:39.986696    6293 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5102,"bootTime":1726590637,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:39.986777    6293 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:39.993449    6293 out.go:177] * [embed-certs-238000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:40.001377    6293 notify.go:220] Checking for updates...
	I0917 10:55:40.005335    6293 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:40.011286    6293 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:40.019290    6293 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:40.031262    6293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:40.039253    6293 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:40.047174    6293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:40.050643    6293 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:40.050708    6293 config.go:182] Loaded profile config "no-preload-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:40.050749    6293 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:40.055310    6293 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:55:40.062277    6293 start.go:297] selected driver: qemu2
	I0917 10:55:40.062284    6293 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:55:40.062290    6293 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:40.064829    6293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:55:40.069317    6293 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:55:40.073401    6293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:40.073422    6293 cni.go:84] Creating CNI manager for ""
	I0917 10:55:40.073452    6293 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:55:40.073457    6293 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:55:40.073503    6293 start.go:340] cluster config:
	{Name:embed-certs-238000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:40.077376    6293 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:40.085282    6293 out.go:177] * Starting "embed-certs-238000" primary control-plane node in "embed-certs-238000" cluster
	I0917 10:55:40.089276    6293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:55:40.089296    6293 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:55:40.089306    6293 cache.go:56] Caching tarball of preloaded images
	I0917 10:55:40.089385    6293 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:55:40.089392    6293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:55:40.089451    6293 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/embed-certs-238000/config.json ...
	I0917 10:55:40.089462    6293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/embed-certs-238000/config.json: {Name:mkbf613702efdc39ef5ccbf83b13e23acd688cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:55:40.089768    6293 start.go:360] acquireMachinesLock for embed-certs-238000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:40.089807    6293 start.go:364] duration metric: took 32.709µs to acquireMachinesLock for "embed-certs-238000"
	I0917 10:55:40.089825    6293 start.go:93] Provisioning new machine with config: &{Name:embed-certs-238000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:40.089855    6293 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:40.093362    6293 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:40.109749    6293 start.go:159] libmachine.API.Create for "embed-certs-238000" (driver="qemu2")
	I0917 10:55:40.109785    6293 client.go:168] LocalClient.Create starting
	I0917 10:55:40.109852    6293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:40.109888    6293 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:40.109898    6293 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:40.109943    6293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:40.109966    6293 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:40.109974    6293 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:40.110340    6293 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:40.399735    6293 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:40.571861    6293 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:40.571869    6293 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:40.572057    6293 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:40.581127    6293 main.go:141] libmachine: STDOUT: 
	I0917 10:55:40.581146    6293 main.go:141] libmachine: STDERR: 
	I0917 10:55:40.581205    6293 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2 +20000M
	I0917 10:55:40.589175    6293 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:40.589202    6293 main.go:141] libmachine: STDERR: 
	I0917 10:55:40.589219    6293 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:40.589226    6293 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:40.589237    6293 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:40.589270    6293 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:02:b8:b5:3a:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:40.590872    6293 main.go:141] libmachine: STDOUT: 
	I0917 10:55:40.590886    6293 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:40.590909    6293 client.go:171] duration metric: took 481.133125ms to LocalClient.Create
	I0917 10:55:42.593055    6293 start.go:128] duration metric: took 2.503247083s to createHost
	I0917 10:55:42.593229    6293 start.go:83] releasing machines lock for "embed-certs-238000", held for 2.503388292s
	W0917 10:55:42.593290    6293 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:42.600680    6293 out.go:177] * Deleting "embed-certs-238000" in qemu2 ...
	W0917 10:55:42.641725    6293 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:42.641754    6293 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:47.643758    6293 start.go:360] acquireMachinesLock for embed-certs-238000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:47.644207    6293 start.go:364] duration metric: took 369.042µs to acquireMachinesLock for "embed-certs-238000"
	I0917 10:55:47.644790    6293 start.go:93] Provisioning new machine with config: &{Name:embed-certs-238000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:47.645248    6293 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:47.665813    6293 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:47.715466    6293 start.go:159] libmachine.API.Create for "embed-certs-238000" (driver="qemu2")
	I0917 10:55:47.715516    6293 client.go:168] LocalClient.Create starting
	I0917 10:55:47.715641    6293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:47.715705    6293 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:47.715723    6293 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:47.715792    6293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:47.715841    6293 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:47.715856    6293 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:47.716516    6293 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:47.892448    6293 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:47.947120    6293 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:47.947125    6293 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:47.947325    6293 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:47.956369    6293 main.go:141] libmachine: STDOUT: 
	I0917 10:55:47.956393    6293 main.go:141] libmachine: STDERR: 
	I0917 10:55:47.956446    6293 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2 +20000M
	I0917 10:55:47.964277    6293 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:47.964293    6293 main.go:141] libmachine: STDERR: 
	I0917 10:55:47.964303    6293 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:47.964309    6293 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:47.964321    6293 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:47.964348    6293 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f2:03:63:b3:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:47.965898    6293 main.go:141] libmachine: STDOUT: 
	I0917 10:55:47.965914    6293 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:47.965927    6293 client.go:171] duration metric: took 250.411584ms to LocalClient.Create
	I0917 10:55:49.968205    6293 start.go:128] duration metric: took 2.32298475s to createHost
	I0917 10:55:49.968284    6293 start.go:83] releasing machines lock for "embed-certs-238000", held for 2.324127334s
	W0917 10:55:49.968574    6293 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-238000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:49.978386    6293 out.go:201] 
	W0917 10:55:49.995563    6293 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:49.995588    6293 out.go:270] * 
	W0917 10:55:49.998167    6293 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:50.007458    6293 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-238000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (62.038792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.15s)
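Both FirstStart logs show the same control flow: one createHost attempt, deletion of the half-created profile, a fixed five-second pause ("Will try again in 5 seconds ..."), then exactly one retry before exiting with GUEST_PROVISION. An illustrative reconstruction of that shape in Go follows; the function names are hypothetical, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// startWithRetry mirrors the retry-once behavior visible in the log:
// a second consecutive failure is fatal.
func startWithRetry(create func() error) error {
	if err := create(); err == nil {
		return nil
	} else {
		log.Printf("! StartHost failed, but will try again: %v", err)
	}
	time.Sleep(5 * time.Second) // the fixed back-off seen above
	return create()
}

func main() {
	attempt := func() error {
		// The only error this environment ever produces.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	if err := startWithRetry(attempt); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}

With a persistent environment fault like the refused socket, the single retry cannot help, which is why every profile in this group fails in roughly ten seconds.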

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-761000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-761000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-761000 describe deploy/metrics-server -n kube-system: exit status 1 (28.904417ms)

** stderr ** 
	error: context "no-preload-761000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-761000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (30.921417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.17s)
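The assertion at start_stop_delete_test.go:221 expects the custom registry passed via --registries to be prepended to the custom image from --images, yielding fake.domain/registry.k8s.io/echoserver:1.4 in the metrics-server deployment. A small Go sketch of that composition (illustrative only, not minikube's implementation):

package main

import "fmt"

// addonImage composes a registry override with an image override the way
// the expected value in the assertion above implies: registry + "/" + image.
func addonImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(addonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// prints: fake.domain/registry.k8s.io/echoserver:1.4
}

Here the check never gets that far: with no running cluster and no kubectl context, the deployment info is empty, so the image assertion fails vacuously.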

TestStartStop/group/no-preload/serial/SecondStart (7.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.083621s)

-- stdout --
	* [no-preload-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-761000" primary control-plane node in "no-preload-761000" cluster
	* Restarting existing qemu2 VM for "no-preload-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:55:42.993354    6331 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:42.993530    6331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:42.993533    6331 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:42.993536    6331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:42.993658    6331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:42.994729    6331 out.go:352] Setting JSON to false
	I0917 10:55:43.010742    6331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5106,"bootTime":1726590637,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:43.010809    6331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:43.016208    6331 out.go:177] * [no-preload-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:43.023064    6331 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:43.023103    6331 notify.go:220] Checking for updates...
	I0917 10:55:43.030128    6331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:43.033053    6331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:43.036130    6331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:43.039061    6331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:43.042152    6331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:43.045351    6331 config.go:182] Loaded profile config "no-preload-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:43.045608    6331 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:43.048955    6331 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:55:43.056092    6331 start.go:297] selected driver: qemu2
	I0917 10:55:43.056101    6331 start.go:901] validating driver "qemu2" against &{Name:no-preload-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:43.056170    6331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:43.058382    6331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:43.058413    6331 cni.go:84] Creating CNI manager for ""
	I0917 10:55:43.058441    6331 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:55:43.058481    6331 start.go:340] cluster config:
	{Name:no-preload-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:43.062165    6331 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.070042    6331 out.go:177] * Starting "no-preload-761000" primary control-plane node in "no-preload-761000" cluster
	I0917 10:55:43.074127    6331 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:55:43.074203    6331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/no-preload-761000/config.json ...
	I0917 10:55:43.074224    6331 cache.go:107] acquiring lock: {Name:mkdc12a93d9deba88b8d1060e8a60dfdaeded8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074235    6331 cache.go:107] acquiring lock: {Name:mk931da1dbbf2c2e59821581c317dc5df31663b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074264    6331 cache.go:107] acquiring lock: {Name:mka3edc194783e254b182145fadcbb403553614e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074261    6331 cache.go:107] acquiring lock: {Name:mkbe6e2b17dfb0cf5b9b41a0cfe98e86ee312744 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074320    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0917 10:55:43.074327    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0917 10:55:43.074330    6331 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 107.625µs
	I0917 10:55:43.074332    6331 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 70µs
	I0917 10:55:43.074346    6331 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0917 10:55:43.074343    6331 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0917 10:55:43.074352    6331 cache.go:107] acquiring lock: {Name:mkb6116f7da9fae1343c5ec2f15ca27329260db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074364    6331 cache.go:107] acquiring lock: {Name:mkabc0356933cf4b0130508599e71d338897b871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074335    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0917 10:55:43.074356    6331 cache.go:107] acquiring lock: {Name:mk34c6c5d626fa8edbca8ddc03b13aad0f91c621 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074449    6331 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 212.917µs
	I0917 10:55:43.074461    6331 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0917 10:55:43.074388    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 10:55:43.074472    6331 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 257µs
	I0917 10:55:43.074478    6331 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 10:55:43.074397    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0917 10:55:43.074415    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0917 10:55:43.074486    6331 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 134.917µs
	I0917 10:55:43.074383    6331 cache.go:107] acquiring lock: {Name:mk0343b88250f2aa3071676e25d03257a382cb49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:43.074492    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0917 10:55:43.074496    6331 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 141.083µs
	I0917 10:55:43.074500    6331 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0917 10:55:43.074491    6331 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0917 10:55:43.074496    6331 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 126.459µs
	I0917 10:55:43.074507    6331 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0917 10:55:43.074532    6331 cache.go:115] /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0917 10:55:43.074537    6331 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 180.542µs
	I0917 10:55:43.074542    6331 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0917 10:55:43.074546    6331 cache.go:87] Successfully saved all images to host disk.
	I0917 10:55:43.074683    6331 start.go:360] acquireMachinesLock for no-preload-761000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:43.074719    6331 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "no-preload-761000"
	I0917 10:55:43.074728    6331 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:55:43.074734    6331 fix.go:54] fixHost starting: 
	I0917 10:55:43.074860    6331 fix.go:112] recreateIfNeeded on no-preload-761000: state=Stopped err=<nil>
	W0917 10:55:43.074870    6331 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:55:43.083106    6331 out.go:177] * Restarting existing qemu2 VM for "no-preload-761000" ...
	I0917 10:55:43.087083    6331 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:43.087125    6331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:43:a5:1b:31:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:43.089274    6331 main.go:141] libmachine: STDOUT: 
	I0917 10:55:43.089293    6331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:43.089323    6331 fix.go:56] duration metric: took 14.589917ms for fixHost
	I0917 10:55:43.089328    6331 start.go:83] releasing machines lock for "no-preload-761000", held for 14.605208ms
	W0917 10:55:43.089336    6331 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:43.089372    6331 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:43.089376    6331 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:48.091336    6331 start.go:360] acquireMachinesLock for no-preload-761000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:49.968444    6331 start.go:364] duration metric: took 1.877098584s to acquireMachinesLock for "no-preload-761000"
	I0917 10:55:49.968601    6331 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:55:49.968617    6331 fix.go:54] fixHost starting: 
	I0917 10:55:49.969438    6331 fix.go:112] recreateIfNeeded on no-preload-761000: state=Stopped err=<nil>
	W0917 10:55:49.969465    6331 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:55:49.991357    6331 out.go:177] * Restarting existing qemu2 VM for "no-preload-761000" ...
	I0917 10:55:49.999296    6331 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:49.999525    6331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:43:a5:1b:31:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/no-preload-761000/disk.qcow2
	I0917 10:55:50.009949    6331 main.go:141] libmachine: STDOUT: 
	I0917 10:55:50.010028    6331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:50.010128    6331 fix.go:56] duration metric: took 41.510417ms for fixHost
	I0917 10:55:50.010157    6331 start.go:83] releasing machines lock for "no-preload-761000", held for 41.650875ms
	W0917 10:55:50.010365    6331 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:50.022342    6331 out.go:201] 
	W0917 10:55:50.025779    6331 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:50.025811    6331 out.go:270] * 
	* 
	W0917 10:55:50.028130    6331 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:50.043395    6331 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (47.508167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.13s)
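Every failure in this block traces to the same STDERR line: Failed to connect to "/var/run/socket_vmnet": Connection refused — socket_vmnet_client dials the unix socket before handing the fd to qemu-system-aarch64, and no daemon is listening. A minimal Go sketch of that dial, usable as a runner pre-check (a diagnostic aid only, not part of the minikube test suite; the socket path is taken from the log above):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Path taken from the log above (SocketVMnetPath).
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// "connection refused" reproduces the STDERR in this report:
    		// the path is configured, but no socket_vmnet daemon is listening.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }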

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-238000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-238000 create -f testdata/busybox.yaml: exit status 1 (31.015541ms)

** stderr ** 
	error: context "embed-certs-238000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-238000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (30.689958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (35.37225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
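The kubectl create here fails before it reaches any API server: the kubeconfig has no embed-certs-238000 entry because FirstStart never provisioned the VM. A small Go sketch of the guard a harness could run before issuing context-scoped kubectl commands (contextExists is a hypothetical helper, not from the test code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // contextExists shells out the way the harness does; `kubectl config
    // get-contexts -o name` prints one context name per line.
    func contextExists(name string) (bool, error) {
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false, err
    	}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line == name {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := contextExists("embed-certs-238000")
    	fmt.Println(ok, err) // on this runner: false <nil>, matching the error above
    }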

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-761000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (32.619584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
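The "client config:" prefix at start_stop_delete_test.go:275 comes from building a kubeconfig-backed client for a context that was never written. A sketch of that construction with client-go (assumes a module with k8s.io/client-go as a dependency); clientcmd is what emits the `context "no-preload-761000" does not exist` error seen above:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a REST config pinned to a named kubeconfig context — the same
    	// operation that fails above when the context is missing.
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    		clientcmd.NewDefaultClientConfigLoadingRules(),
    		&clientcmd.ConfigOverrides{CurrentContext: "no-preload-761000"},
    	).ClientConfig()
    	if err != nil {
    		fmt.Println("client config:", err) // reproduces the harness error string
    		return
    	}
    	fmt.Println("API server:", cfg.Host)
    }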

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-761000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-761000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-761000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.180708ms)

** stderr ** 
	error: context "no-preload-761000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-761000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (30.913959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
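The assertion at start_stop_delete_test.go:297 greps the `describe` output for the overridden image. Given a working context, the same check can be expressed more directly against the pod template; a sketch under the assumption that the deployment exists (on this runner it does not, so the query itself fails):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const want = "registry.k8s.io/echoserver:1.4"
    	// jsonpath pulls just the container images instead of the full describe text.
    	out, err := exec.Command("kubectl", "--context", "no-preload-761000",
    		"get", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard",
    		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
    	if err != nil {
    		fmt.Println("cannot query deployment:", err) // the failure mode seen above
    		return
    	}
    	if !strings.Contains(string(out), want) {
    		fmt.Printf("addon image %q not found in %q\n", want, out)
    	}
    }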

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-238000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-238000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-238000 describe deploy/metrics-server -n kube-system: exit status 1 (28.775833ms)

** stderr ** 
	error: context "embed-certs-238000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-238000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (32.849875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
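The expected string at :221, "fake.domain/registry.k8s.io/echoserver:1.4", is the composite of the --registries and --images overrides passed to `addons enable`. A tiny sketch of how that expectation is formed (overridden is an illustrative helper, not minikube's implementation):

    package main

    import "fmt"

    // overridden prefixes an image reference with a custom registry,
    // which is how the expected addon image string above is composed.
    func overridden(registry, image string) string {
    	if registry == "" {
    		return image
    	}
    	return registry + "/" + image
    }

    func main() {
    	fmt.Println(overridden("fake.domain", "registry.k8s.io/echoserver:1.4"))
    	// Output: fake.domain/registry.k8s.io/echoserver:1.4
    }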

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-761000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (32.31525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
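The want/got diff above is empty on the "got" side because `image list` had no VM to inspect. For reference, a sketch of consuming that list programmatically; the JSON field name repoTags is an assumption about the output shape, so verify it against your minikube build:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // image models the assumed shape of `minikube image list --format=json`.
    type image struct {
    	RepoTags []string `json:"repoTags"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "no-preload-761000",
    		"image", "list", "--format=json").Output()
    	if err != nil {
    		fmt.Println("image list failed:", err)
    		return
    	}
    	var imgs []image
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	got := map[string]bool{}
    	for _, img := range imgs {
    		for _, tag := range img.RepoTags {
    			got[tag] = true
    		}
    	}
    	// Two of the tags the test wants; extend with the full list as needed.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.31.1",
    		"registry.k8s.io/pause:3.10",
    	} {
    		if !got[want] {
    			fmt.Println("missing:", want)
    		}
    	}
    }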

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-761000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-761000 --alsologtostderr -v=1: exit status 83 (40.474125ms)

-- stdout --
	* The control-plane node no-preload-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-761000"

-- /stdout --
** stderr ** 
	I0917 10:55:50.303296    6373 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:50.303454    6373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:50.303458    6373 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:50.303461    6373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:50.303590    6373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:50.303827    6373 out.go:352] Setting JSON to false
	I0917 10:55:50.303834    6373 mustload.go:65] Loading cluster: no-preload-761000
	I0917 10:55:50.304056    6373 config.go:182] Loaded profile config "no-preload-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:50.308388    6373 out.go:177] * The control-plane node no-preload-761000 host is not running: state=Stopped
	I0917 10:55:50.309631    6373 out.go:177]   To start a cluster, run: "minikube start -p no-preload-761000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-761000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (30.714375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (27.415666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
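`pause` exits 83 here because the control-plane host is Stopped; the post-mortem already probes that state with `status --format={{.Host}}`, which exits 7 for a stopped host (noted as "may be ok" above). A sketch that gates pause on the same probe (minikube binary path and profile name taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// A stopped host prints "Stopped" and exits 7; Output() still returns
    	// the stdout bytes alongside the *exec.ExitError.
    	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
    		"--format={{.Host}}", "-p", "no-preload-761000").Output()
    	if strings.TrimSpace(string(out)) != "Running" {
    		fmt.Println("host not running; skipping pause") // avoids the exit-83 path
    		return
    	}
    	if err := exec.Command("out/minikube-darwin-arm64", "pause",
    		"-p", "no-preload-761000").Run(); err != nil {
    		fmt.Println("pause failed:", err)
    	}
    }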

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-080000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-080000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.084667583s)

-- stdout --
	* [default-k8s-diff-port-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-080000" primary control-plane node in "default-k8s-diff-port-080000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-080000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:55:50.724809    6403 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:50.724929    6403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:50.724932    6403 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:50.724935    6403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:50.725062    6403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:50.726169    6403 out.go:352] Setting JSON to false
	I0917 10:55:50.742270    6403 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5113,"bootTime":1726590637,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:50.742336    6403 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:50.747498    6403 out.go:177] * [default-k8s-diff-port-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:50.754376    6403 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:50.754446    6403 notify.go:220] Checking for updates...
	I0917 10:55:50.763320    6403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:50.766375    6403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:50.769381    6403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:50.772431    6403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:50.775408    6403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:50.778672    6403 config.go:182] Loaded profile config "embed-certs-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:50.778748    6403 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:50.778805    6403 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:50.783398    6403 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:55:50.790383    6403 start.go:297] selected driver: qemu2
	I0917 10:55:50.790389    6403 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:55:50.790395    6403 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:50.792698    6403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:55:50.796327    6403 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:55:50.799465    6403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:50.799485    6403 cni.go:84] Creating CNI manager for ""
	I0917 10:55:50.799515    6403 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:55:50.799522    6403 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:55:50.799544    6403 start.go:340] cluster config:
	{Name:default-k8s-diff-port-080000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:50.803197    6403 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:50.809384    6403 out.go:177] * Starting "default-k8s-diff-port-080000" primary control-plane node in "default-k8s-diff-port-080000" cluster
	I0917 10:55:50.813375    6403 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:55:50.813391    6403 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:55:50.813400    6403 cache.go:56] Caching tarball of preloaded images
	I0917 10:55:50.813476    6403 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:55:50.813483    6403 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:55:50.813552    6403 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/default-k8s-diff-port-080000/config.json ...
	I0917 10:55:50.813564    6403 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/default-k8s-diff-port-080000/config.json: {Name:mkd16ee124c377811ead33261243317a66f16b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:55:50.813779    6403 start.go:360] acquireMachinesLock for default-k8s-diff-port-080000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:50.813815    6403 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "default-k8s-diff-port-080000"
	I0917 10:55:50.813826    6403 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:50.813850    6403 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:50.822389    6403 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:50.839595    6403 start.go:159] libmachine.API.Create for "default-k8s-diff-port-080000" (driver="qemu2")
	I0917 10:55:50.839627    6403 client.go:168] LocalClient.Create starting
	I0917 10:55:50.839684    6403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:50.839714    6403 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:50.839723    6403 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:50.839758    6403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:50.839786    6403 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:50.839793    6403 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:50.840157    6403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:51.039879    6403 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:51.321227    6403 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:51.321239    6403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:51.321483    6403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:55:51.331253    6403 main.go:141] libmachine: STDOUT: 
	I0917 10:55:51.331274    6403 main.go:141] libmachine: STDERR: 
	I0917 10:55:51.331330    6403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2 +20000M
	I0917 10:55:51.339135    6403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:51.339151    6403 main.go:141] libmachine: STDERR: 
	I0917 10:55:51.339182    6403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:55:51.339188    6403 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:51.339201    6403 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:51.339230    6403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:a8:52:ee:e5:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:55:51.340823    6403 main.go:141] libmachine: STDOUT: 
	I0917 10:55:51.340838    6403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:51.340859    6403 client.go:171] duration metric: took 501.241417ms to LocalClient.Create
	I0917 10:55:53.342967    6403 start.go:128] duration metric: took 2.5291705s to createHost
	I0917 10:55:53.343037    6403 start.go:83] releasing machines lock for "default-k8s-diff-port-080000", held for 2.52929s
	W0917 10:55:53.343082    6403 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:53.351651    6403 out.go:177] * Deleting "default-k8s-diff-port-080000" in qemu2 ...
	W0917 10:55:53.393120    6403 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:53.393147    6403 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:58.393460    6403 start.go:360] acquireMachinesLock for default-k8s-diff-port-080000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:58.393870    6403 start.go:364] duration metric: took 315.542µs to acquireMachinesLock for "default-k8s-diff-port-080000"
	I0917 10:55:58.394036    6403 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:58.394361    6403 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:55:58.403003    6403 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:55:58.453386    6403 start.go:159] libmachine.API.Create for "default-k8s-diff-port-080000" (driver="qemu2")
	I0917 10:55:58.453431    6403 client.go:168] LocalClient.Create starting
	I0917 10:55:58.453553    6403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:55:58.453629    6403 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:58.453647    6403 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:58.453713    6403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:55:58.453762    6403 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:58.453775    6403 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:58.454313    6403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:55:58.634546    6403 main.go:141] libmachine: Creating SSH key...
	I0917 10:55:58.703634    6403 main.go:141] libmachine: Creating Disk image...
	I0917 10:55:58.703645    6403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:55:58.703811    6403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:55:58.712898    6403 main.go:141] libmachine: STDOUT: 
	I0917 10:55:58.712912    6403 main.go:141] libmachine: STDERR: 
	I0917 10:55:58.712969    6403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2 +20000M
	I0917 10:55:58.720711    6403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:55:58.720731    6403 main.go:141] libmachine: STDERR: 
	I0917 10:55:58.720746    6403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:55:58.720754    6403 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:55:58.720761    6403 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:58.720796    6403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:f0:e8:6b:99:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:55:58.722323    6403 main.go:141] libmachine: STDOUT: 
	I0917 10:55:58.722344    6403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:58.722358    6403 client.go:171] duration metric: took 268.928334ms to LocalClient.Create
	I0917 10:56:00.724454    6403 start.go:128] duration metric: took 2.330120583s to createHost
	I0917 10:56:00.724529    6403 start.go:83] releasing machines lock for "default-k8s-diff-port-080000", held for 2.330707125s
	W0917 10:56:00.724823    6403 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-080000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-080000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:00.751002    6403 out.go:201] 
	W0917 10:56:00.755219    6403 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:00.755266    6403 out.go:270] * 
	* 
	W0917 10:56:00.758174    6403 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:56:00.765134    6403 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-080000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (64.556167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.15s)
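The start path above makes exactly two attempts: create, fail on the socket dial, delete the profile, wait five seconds (start.go:729), create again, then exit 80. A compact Go sketch of that retry shape (startHost is an illustrative stand-in that always fails the way this runner does, not minikube's API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the driver start that fails on this runner.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	var err error
    	for attempt := 1; attempt <= 2; attempt++ {
    		if err = startHost(); err == nil {
    			return
    		}
    		if attempt < 2 {
    			fmt.Println("! StartHost failed, but will try again:", err)
    			time.Sleep(5 * time.Second) // the fixed delay seen in the log
    		}
    	}
    	fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    }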

TestStartStop/group/embed-certs/serial/SecondStart (7.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-238000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-238000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.272378667s)

-- stdout --
	* [embed-certs-238000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-238000" primary control-plane node in "embed-certs-238000" cluster
	* Restarting existing qemu2 VM for "embed-certs-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-238000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:55:53.563737    6430 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:53.563864    6430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:53.563867    6430 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:53.563877    6430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:53.563996    6430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:55:53.565017    6430 out.go:352] Setting JSON to false
	I0917 10:55:53.580956    6430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5116,"bootTime":1726590637,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:55:53.581020    6430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:53.586271    6430 out.go:177] * [embed-certs-238000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:55:53.592304    6430 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:53.592415    6430 notify.go:220] Checking for updates...
	I0917 10:55:53.599176    6430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:55:53.602227    6430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:55:53.605263    6430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:53.608254    6430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:55:53.611227    6430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:53.614595    6430 config.go:182] Loaded profile config "embed-certs-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:53.614877    6430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:53.619194    6430 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:55:53.626282    6430 start.go:297] selected driver: qemu2
	I0917 10:55:53.626291    6430 start.go:901] validating driver "qemu2" against &{Name:embed-certs-238000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:53.626357    6430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:53.628587    6430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:55:53.628612    6430 cni.go:84] Creating CNI manager for ""
	I0917 10:55:53.628632    6430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:55:53.628659    6430 start.go:340] cluster config:
	{Name:embed-certs-238000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-238000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:53.632078    6430 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:53.639194    6430 out.go:177] * Starting "embed-certs-238000" primary control-plane node in "embed-certs-238000" cluster
	I0917 10:55:53.642266    6430 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:55:53.642281    6430 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:55:53.642292    6430 cache.go:56] Caching tarball of preloaded images
	I0917 10:55:53.642358    6430 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:55:53.642371    6430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:55:53.642431    6430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/embed-certs-238000/config.json ...
	I0917 10:55:53.642944    6430 start.go:360] acquireMachinesLock for embed-certs-238000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:53.642978    6430 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "embed-certs-238000"
	I0917 10:55:53.642986    6430 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:55:53.642991    6430 fix.go:54] fixHost starting: 
	I0917 10:55:53.643109    6430 fix.go:112] recreateIfNeeded on embed-certs-238000: state=Stopped err=<nil>
	W0917 10:55:53.643120    6430 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:55:53.651055    6430 out.go:177] * Restarting existing qemu2 VM for "embed-certs-238000" ...
	I0917 10:55:53.655211    6430 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:55:53.655243    6430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f2:03:63:b3:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:55:53.657218    6430 main.go:141] libmachine: STDOUT: 
	I0917 10:55:53.657236    6430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:55:53.657266    6430 fix.go:56] duration metric: took 14.275417ms for fixHost
	I0917 10:55:53.657271    6430 start.go:83] releasing machines lock for "embed-certs-238000", held for 14.289084ms
	W0917 10:55:53.657276    6430 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:55:53.657317    6430 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:55:53.657322    6430 start.go:729] Will try again in 5 seconds ...
	I0917 10:55:58.659234    6430 start.go:360] acquireMachinesLock for embed-certs-238000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:00.724725    6430 start.go:364] duration metric: took 2.065500709s to acquireMachinesLock for "embed-certs-238000"
	I0917 10:56:00.724876    6430 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:56:00.724895    6430 fix.go:54] fixHost starting: 
	I0917 10:56:00.725739    6430 fix.go:112] recreateIfNeeded on embed-certs-238000: state=Stopped err=<nil>
	W0917 10:56:00.725772    6430 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:56:00.750999    6430 out.go:177] * Restarting existing qemu2 VM for "embed-certs-238000" ...
	I0917 10:56:00.755108    6430 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:00.755368    6430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f2:03:63:b3:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/embed-certs-238000/disk.qcow2
	I0917 10:56:00.765366    6430 main.go:141] libmachine: STDOUT: 
	I0917 10:56:00.765424    6430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:00.765509    6430 fix.go:56] duration metric: took 40.614584ms for fixHost
	I0917 10:56:00.765523    6430 start.go:83] releasing machines lock for "embed-certs-238000", held for 40.764459ms
	W0917 10:56:00.765732    6430 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-238000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:00.780910    6430 out.go:201] 
	W0917 10:56:00.784462    6430 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:00.784489    6430 out.go:270] * 
	* 
	W0917 10:56:00.786343    6430 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:56:00.799266    6430 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-238000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (47.332ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.32s)
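
Every failure in this block reduces to one root cause: the vmnet helper that socket_vmnet_client dials at /var/run/socket_vmnet is not accepting connections, so the VM never boots and every later step inherits a stopped host. A minimal triage sketch for that "Connection refused" (the socket and client paths are taken from the log above; the daemon binary location and gateway address are assumptions based on a default socket_vmnet install):

    # Is the vmnet helper socket present, and is its daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If nothing is listening, relaunch the daemon (root is required for vmnet;
    # binary path and gateway are assumed defaults, not taken from this log)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon back, the log's own advice applies: "minikube delete -p embed-certs-238000" followed by a fresh start should clear the stale profile.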

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-080000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-080000 create -f testdata/busybox.yaml: exit status 1 (30.526375ms)

** stderr ** 
	error: context "default-k8s-diff-port-080000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-080000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (31.581417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (33.923542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
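
This is a cascade rather than a new failure: FirstStart never provisioned the cluster, so no kubeconfig context was written for the profile and every kubectl invocation dies before touching the manifest. A quick hedged check to confirm the missing context is the culprit:

    # The profile name would appear here if start had succeeded
    kubectl config get-contexts
    kubectl config get-contexts default-k8s-diff-port-080000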

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-238000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (33.888916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-238000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-238000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-238000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.153875ms)

** stderr ** 
	error: context "embed-certs-238000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-238000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (32.049792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-080000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-080000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-080000 describe deploy/metrics-server -n kube-system: exit status 1 (28.679375ms)

** stderr ** 
	error: context "default-k8s-diff-port-080000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-080000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (32.4425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-238000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (32.421917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
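
The want/got diff above is empty on the "got" side because "image list" had no running VM to query, not because individual images were evicted. On a healthy profile the same command the test runs returns the full control-plane set; a sketch for comparing it against the want-list (jq and the repoTags field layout are assumptions, not part of the test):

    out/minikube-darwin-arm64 -p embed-certs-238000 image list --format=json \
        | jq -r '.[].repoTags[]' | sort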

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-238000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-238000 --alsologtostderr -v=1: exit status 83 (44.427167ms)

-- stdout --
	* The control-plane node embed-certs-238000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-238000"

-- /stdout --
** stderr ** 
	I0917 10:56:01.062898    6464 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:56:01.063054    6464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:01.063057    6464 out.go:358] Setting ErrFile to fd 2...
	I0917 10:56:01.063060    6464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:01.063192    6464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:56:01.063420    6464 out.go:352] Setting JSON to false
	I0917 10:56:01.063427    6464 mustload.go:65] Loading cluster: embed-certs-238000
	I0917 10:56:01.063666    6464 config.go:182] Loaded profile config "embed-certs-238000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:01.068074    6464 out.go:177] * The control-plane node embed-certs-238000 host is not running: state=Stopped
	I0917 10:56:01.072020    6464 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-238000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-238000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (29.278709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (28.818167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-238000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
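
Exit status 83 is minikube's "wrong state" advice path, not a crash: pause needs a running control plane and the host is stopped. The recovery sequence is the one the log itself prints, sketched here:

    out/minikube-darwin-arm64 start -p embed-certs-238000
    out/minikube-darwin-arm64 pause -p embed-certs-238000 --alsologtostderr -v=1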

TestStartStop/group/newest-cni/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-929000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-929000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.992570542s)

-- stdout --
	* [newest-cni-929000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-929000" primary control-plane node in "newest-cni-929000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-929000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 10:56:01.375156    6487 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:56:01.375282    6487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:01.375285    6487 out.go:358] Setting ErrFile to fd 2...
	I0917 10:56:01.375288    6487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:01.375416    6487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:56:01.376481    6487 out.go:352] Setting JSON to false
	I0917 10:56:01.392732    6487 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5124,"bootTime":1726590637,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:56:01.392794    6487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:56:01.398002    6487 out.go:177] * [newest-cni-929000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:56:01.405162    6487 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:56:01.405219    6487 notify.go:220] Checking for updates...
	I0917 10:56:01.411100    6487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:56:01.414053    6487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:56:01.415551    6487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:56:01.419051    6487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:56:01.422057    6487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:56:01.425432    6487 config.go:182] Loaded profile config "default-k8s-diff-port-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:01.425502    6487 config.go:182] Loaded profile config "multinode-404000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:01.425556    6487 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:56:01.430055    6487 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 10:56:01.437102    6487 start.go:297] selected driver: qemu2
	I0917 10:56:01.437110    6487 start.go:901] validating driver "qemu2" against <nil>
	I0917 10:56:01.437117    6487 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:56:01.439335    6487 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0917 10:56:01.439378    6487 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0917 10:56:01.443040    6487 out.go:177] * Automatically selected the socket_vmnet network
	I0917 10:56:01.450154    6487 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 10:56:01.450173    6487 cni.go:84] Creating CNI manager for ""
	I0917 10:56:01.450204    6487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:56:01.450210    6487 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:56:01.450248    6487 start.go:340] cluster config:
	{Name:newest-cni-929000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-929000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:56:01.454004    6487 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:56:01.461116    6487 out.go:177] * Starting "newest-cni-929000" primary control-plane node in "newest-cni-929000" cluster
	I0917 10:56:01.464968    6487 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:56:01.464985    6487 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:56:01.464992    6487 cache.go:56] Caching tarball of preloaded images
	I0917 10:56:01.465062    6487 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:56:01.465069    6487 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:56:01.465133    6487 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/newest-cni-929000/config.json ...
	I0917 10:56:01.465145    6487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/newest-cni-929000/config.json: {Name:mk44b55ec41d3fe455d501d8980f315122625e6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:56:01.465538    6487 start.go:360] acquireMachinesLock for newest-cni-929000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:01.465574    6487 start.go:364] duration metric: took 30.209µs to acquireMachinesLock for "newest-cni-929000"
	I0917 10:56:01.465585    6487 start.go:93] Provisioning new machine with config: &{Name:newest-cni-929000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-929000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:56:01.465620    6487 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:56:01.469077    6487 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:56:01.486699    6487 start.go:159] libmachine.API.Create for "newest-cni-929000" (driver="qemu2")
	I0917 10:56:01.486736    6487 client.go:168] LocalClient.Create starting
	I0917 10:56:01.486804    6487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:56:01.486834    6487 main.go:141] libmachine: Decoding PEM data...
	I0917 10:56:01.486844    6487 main.go:141] libmachine: Parsing certificate...
	I0917 10:56:01.486885    6487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:56:01.486912    6487 main.go:141] libmachine: Decoding PEM data...
	I0917 10:56:01.486926    6487 main.go:141] libmachine: Parsing certificate...
	I0917 10:56:01.487342    6487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:56:01.650876    6487 main.go:141] libmachine: Creating SSH key...
	I0917 10:56:01.821427    6487 main.go:141] libmachine: Creating Disk image...
	I0917 10:56:01.821435    6487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:56:01.821707    6487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:01.831136    6487 main.go:141] libmachine: STDOUT: 
	I0917 10:56:01.831149    6487 main.go:141] libmachine: STDERR: 
	I0917 10:56:01.831215    6487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2 +20000M
	I0917 10:56:01.839224    6487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:56:01.839237    6487 main.go:141] libmachine: STDERR: 
	I0917 10:56:01.839256    6487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:01.839261    6487 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:56:01.839273    6487 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:01.839302    6487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:5d:9e:37:cc:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:01.840885    6487 main.go:141] libmachine: STDOUT: 
	I0917 10:56:01.840898    6487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:01.840915    6487 client.go:171] duration metric: took 354.185208ms to LocalClient.Create
	I0917 10:56:03.843066    6487 start.go:128] duration metric: took 2.377485042s to createHost
	I0917 10:56:03.843140    6487 start.go:83] releasing machines lock for "newest-cni-929000", held for 2.377629958s
	W0917 10:56:03.843206    6487 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:03.852485    6487 out.go:177] * Deleting "newest-cni-929000" in qemu2 ...
	W0917 10:56:03.886343    6487 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:03.886374    6487 start.go:729] Will try again in 5 seconds ...
	I0917 10:56:08.888425    6487 start.go:360] acquireMachinesLock for newest-cni-929000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:08.888795    6487 start.go:364] duration metric: took 290.875µs to acquireMachinesLock for "newest-cni-929000"
	I0917 10:56:08.888913    6487 start.go:93] Provisioning new machine with config: &{Name:newest-cni-929000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-929000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:56:08.889235    6487 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 10:56:08.893858    6487 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 10:56:08.944168    6487 start.go:159] libmachine.API.Create for "newest-cni-929000" (driver="qemu2")
	I0917 10:56:08.944223    6487 client.go:168] LocalClient.Create starting
	I0917 10:56:08.944353    6487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/ca.pem
	I0917 10:56:08.944421    6487 main.go:141] libmachine: Decoding PEM data...
	I0917 10:56:08.944437    6487 main.go:141] libmachine: Parsing certificate...
	I0917 10:56:08.944499    6487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1312/.minikube/certs/cert.pem
	I0917 10:56:08.944548    6487 main.go:141] libmachine: Decoding PEM data...
	I0917 10:56:08.944569    6487 main.go:141] libmachine: Parsing certificate...
	I0917 10:56:08.945190    6487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0917 10:56:09.119100    6487 main.go:141] libmachine: Creating SSH key...
	I0917 10:56:09.264007    6487 main.go:141] libmachine: Creating Disk image...
	I0917 10:56:09.264016    6487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 10:56:09.264190    6487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2.raw /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:09.273563    6487 main.go:141] libmachine: STDOUT: 
	I0917 10:56:09.273584    6487 main.go:141] libmachine: STDERR: 
	I0917 10:56:09.273650    6487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2 +20000M
	I0917 10:56:09.281469    6487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 10:56:09.281493    6487 main.go:141] libmachine: STDERR: 
	I0917 10:56:09.281508    6487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:09.281513    6487 main.go:141] libmachine: Starting QEMU VM...
	I0917 10:56:09.281529    6487 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:09.281565    6487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b5:00:35:ba:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:09.283204    6487 main.go:141] libmachine: STDOUT: 
	I0917 10:56:09.283217    6487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:09.283230    6487 client.go:171] duration metric: took 339.011208ms to LocalClient.Create
	I0917 10:56:11.285384    6487 start.go:128] duration metric: took 2.396191625s to createHost
	I0917 10:56:11.285434    6487 start.go:83] releasing machines lock for "newest-cni-929000", held for 2.396692s
	W0917 10:56:11.285781    6487 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-929000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-929000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:11.296495    6487 out.go:201] 
	W0917 10:56:11.306580    6487 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:11.306606    6487 out.go:270] * 
	* 
	W0917 10:56:11.309462    6487 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:56:11.320448    6487 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-929000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000: exit status 7 (63.953666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-929000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.06s)
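
Every failure in this group traces to the same root cause visible in the stderr above: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client could not hand qemu-system-aarch64 its networking file descriptor and the driver start aborted. A quick way to confirm the daemon is down before rerunning is to dial the unix socket directly. This is a minimal Go sketch, not part of the test suite; the socket path is the SocketVMnetPath value from the profile config logged above, and everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config in the logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the failure in the logs.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

(Dialing may require the same privileges the daemon runs with; if the probe fails, the likely fix is to start socket_vmnet on the host rather than to delete the minikube profile.)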

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-080000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-080000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.26228475s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-080000" primary control-plane node in "default-k8s-diff-port-080000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-080000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-080000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:56:04.124836    6514 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:56:04.124978    6514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:04.124981    6514 out.go:358] Setting ErrFile to fd 2...
	I0917 10:56:04.124984    6514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:04.125090    6514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:56:04.126092    6514 out.go:352] Setting JSON to false
	I0917 10:56:04.142247    6514 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5127,"bootTime":1726590637,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:56:04.142315    6514 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:56:04.147539    6514 out.go:177] * [default-k8s-diff-port-080000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:56:04.155451    6514 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:56:04.155494    6514 notify.go:220] Checking for updates...
	I0917 10:56:04.161425    6514 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:56:04.164484    6514 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:56:04.166011    6514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:56:04.169446    6514 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:56:04.172459    6514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:56:04.175764    6514 config.go:182] Loaded profile config "default-k8s-diff-port-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:04.176037    6514 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:56:04.180375    6514 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:56:04.187460    6514 start.go:297] selected driver: qemu2
	I0917 10:56:04.187468    6514 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:56:04.187535    6514 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:56:04.189955    6514 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:56:04.189977    6514 cni.go:84] Creating CNI manager for ""
	I0917 10:56:04.189997    6514 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:56:04.190029    6514 start.go:340] cluster config:
	{Name:default-k8s-diff-port-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:56:04.193816    6514 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:56:04.201490    6514 out.go:177] * Starting "default-k8s-diff-port-080000" primary control-plane node in "default-k8s-diff-port-080000" cluster
	I0917 10:56:04.205408    6514 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:56:04.205425    6514 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:56:04.205437    6514 cache.go:56] Caching tarball of preloaded images
	I0917 10:56:04.205509    6514 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:56:04.205522    6514 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:56:04.205582    6514 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/default-k8s-diff-port-080000/config.json ...
	I0917 10:56:04.206077    6514 start.go:360] acquireMachinesLock for default-k8s-diff-port-080000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:04.206113    6514 start.go:364] duration metric: took 29µs to acquireMachinesLock for "default-k8s-diff-port-080000"
	I0917 10:56:04.206122    6514 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:56:04.206128    6514 fix.go:54] fixHost starting: 
	I0917 10:56:04.206252    6514 fix.go:112] recreateIfNeeded on default-k8s-diff-port-080000: state=Stopped err=<nil>
	W0917 10:56:04.206261    6514 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:56:04.210475    6514 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-080000" ...
	I0917 10:56:04.218406    6514 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:04.218438    6514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:f0:e8:6b:99:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:56:04.220443    6514 main.go:141] libmachine: STDOUT: 
	I0917 10:56:04.220464    6514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:04.220498    6514 fix.go:56] duration metric: took 14.369667ms for fixHost
	I0917 10:56:04.220503    6514 start.go:83] releasing machines lock for "default-k8s-diff-port-080000", held for 14.385667ms
	W0917 10:56:04.220509    6514 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:04.220552    6514 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:04.220557    6514 start.go:729] Will try again in 5 seconds ...
	I0917 10:56:09.222452    6514 start.go:360] acquireMachinesLock for default-k8s-diff-port-080000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:11.285591    6514 start.go:364] duration metric: took 2.063164458s to acquireMachinesLock for "default-k8s-diff-port-080000"
	I0917 10:56:11.285797    6514 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:56:11.285814    6514 fix.go:54] fixHost starting: 
	I0917 10:56:11.286558    6514 fix.go:112] recreateIfNeeded on default-k8s-diff-port-080000: state=Stopped err=<nil>
	W0917 10:56:11.286587    6514 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:56:11.303456    6514 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-080000" ...
	I0917 10:56:11.309399    6514 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:11.309636    6514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:f0:e8:6b:99:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/default-k8s-diff-port-080000/disk.qcow2
	I0917 10:56:11.318865    6514 main.go:141] libmachine: STDOUT: 
	I0917 10:56:11.318924    6514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:11.319014    6514 fix.go:56] duration metric: took 33.1925ms for fixHost
	I0917 10:56:11.319029    6514 start.go:83] releasing machines lock for "default-k8s-diff-port-080000", held for 33.379084ms
	W0917 10:56:11.319197    6514 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-080000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:11.332485    6514 out.go:201] 
	W0917 10:56:11.336359    6514 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:11.336403    6514 out.go:270] * 
	* 
	W0917 10:56:11.338478    6514 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:56:11.349439    6514 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-080000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (51.817833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.32s)
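
The trace above also shows the start path's retry shape: fixHost fails, the machines lock is released, minikube logs "Will try again in 5 seconds", and the single retry fails identically because socket_vmnet is still down, at which point the run exits with GUEST_PROVISION (exit status 80). A sketch of that bounded retry-with-delay pattern, with a hypothetical startHost standing in for the real driver call (illustrative only, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a hypothetical stand-in for the driver start call that
// fails in the log above; it always returns the observed error.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	var err error
	for attempt := 0; attempt < 2; attempt++ { // one initial try plus one retry
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if attempt == 0 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
}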

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-080000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (33.76775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-080000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-080000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-080000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.917917ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-080000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-080000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (37.256417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-080000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
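
The -want +got block above is a go-cmp style diff: every expected v1.31.1 image is prefixed with "-" (present in the wanted list, absent from the actual one) and nothing carries "+", because image list returned no images at all from the never-provisioned VM. A minimal sketch of how such a diff is produced, assuming the github.com/google/go-cmp module (illustrative, not the test's actual code):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the VM never started, so no images are loaded
	// Lines prefixed "-" are entries in want that are missing from got.
	fmt.Println(cmp.Diff(want, got))
}
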
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (29.630375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-080000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-080000 --alsologtostderr -v=1: exit status 83 (42.827208ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-080000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-080000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:56:11.604999    6545 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:56:11.605168    6545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:11.605172    6545 out.go:358] Setting ErrFile to fd 2...
	I0917 10:56:11.605174    6545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:11.605308    6545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:56:11.605528    6545 out.go:352] Setting JSON to false
	I0917 10:56:11.605535    6545 mustload.go:65] Loading cluster: default-k8s-diff-port-080000
	I0917 10:56:11.605735    6545 config.go:182] Loaded profile config "default-k8s-diff-port-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:11.610457    6545 out.go:177] * The control-plane node default-k8s-diff-port-080000 host is not running: state=Stopped
	I0917 10:56:11.614495    6545 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-080000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-080000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (29.674084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (29.3665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-080000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-929000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-929000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.183327125s)

                                                
                                                
-- stdout --
	* [newest-cni-929000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-929000" primary control-plane node in "newest-cni-929000" cluster
	* Restarting existing qemu2 VM for "newest-cni-929000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-929000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:56:14.756110    6580 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:56:14.756242    6580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:14.756246    6580 out.go:358] Setting ErrFile to fd 2...
	I0917 10:56:14.756249    6580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:14.756388    6580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:56:14.757391    6580 out.go:352] Setting JSON to false
	I0917 10:56:14.773384    6580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5137,"bootTime":1726590637,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:56:14.773450    6580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:56:14.777404    6580 out.go:177] * [newest-cni-929000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:56:14.784277    6580 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:56:14.784318    6580 notify.go:220] Checking for updates...
	I0917 10:56:14.791388    6580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:56:14.793051    6580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:56:14.796398    6580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:56:14.799387    6580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:56:14.802284    6580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:56:14.805651    6580 config.go:182] Loaded profile config "newest-cni-929000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:14.805894    6580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:56:14.810351    6580 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:56:14.815412    6580 start.go:297] selected driver: qemu2
	I0917 10:56:14.815420    6580 start.go:901] validating driver "qemu2" against &{Name:newest-cni-929000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-929000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:56:14.815477    6580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:56:14.817823    6580 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 10:56:14.817847    6580 cni.go:84] Creating CNI manager for ""
	I0917 10:56:14.817867    6580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:56:14.817890    6580 start.go:340] cluster config:
	{Name:newest-cni-929000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-929000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:56:14.821210    6580 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:56:14.828359    6580 out.go:177] * Starting "newest-cni-929000" primary control-plane node in "newest-cni-929000" cluster
	I0917 10:56:14.832399    6580 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:56:14.832415    6580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 10:56:14.832424    6580 cache.go:56] Caching tarball of preloaded images
	I0917 10:56:14.832485    6580 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 10:56:14.832492    6580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:56:14.832565    6580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/newest-cni-929000/config.json ...
	I0917 10:56:14.833064    6580 start.go:360] acquireMachinesLock for newest-cni-929000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:14.833097    6580 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "newest-cni-929000"
	I0917 10:56:14.833106    6580 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:56:14.833113    6580 fix.go:54] fixHost starting: 
	I0917 10:56:14.833229    6580 fix.go:112] recreateIfNeeded on newest-cni-929000: state=Stopped err=<nil>
	W0917 10:56:14.833238    6580 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:56:14.837387    6580 out.go:177] * Restarting existing qemu2 VM for "newest-cni-929000" ...
	I0917 10:56:14.845379    6580 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:14.845413    6580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b5:00:35:ba:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:14.847348    6580 main.go:141] libmachine: STDOUT: 
	I0917 10:56:14.847369    6580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:14.847395    6580 fix.go:56] duration metric: took 14.284458ms for fixHost
	I0917 10:56:14.847400    6580 start.go:83] releasing machines lock for "newest-cni-929000", held for 14.2985ms
	W0917 10:56:14.847406    6580 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:14.847434    6580 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:14.847439    6580 start.go:729] Will try again in 5 seconds ...
	I0917 10:56:19.849523    6580 start.go:360] acquireMachinesLock for newest-cni-929000: {Name:mkdac3546d596b49233ac92a6f0bc304c3188eec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:19.850056    6580 start.go:364] duration metric: took 437.458µs to acquireMachinesLock for "newest-cni-929000"
	I0917 10:56:19.850213    6580 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:56:19.850233    6580 fix.go:54] fixHost starting: 
	I0917 10:56:19.850967    6580 fix.go:112] recreateIfNeeded on newest-cni-929000: state=Stopped err=<nil>
	W0917 10:56:19.850995    6580 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:56:19.861323    6580 out.go:177] * Restarting existing qemu2 VM for "newest-cni-929000" ...
	I0917 10:56:19.865399    6580 qemu.go:418] Using hvf for hardware acceleration
	I0917 10:56:19.865646    6580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b5:00:35:ba:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/newest-cni-929000/disk.qcow2
	I0917 10:56:19.875153    6580 main.go:141] libmachine: STDOUT: 
	I0917 10:56:19.875212    6580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 10:56:19.875291    6580 fix.go:56] duration metric: took 25.0605ms for fixHost
	I0917 10:56:19.875307    6580 start.go:83] releasing machines lock for "newest-cni-929000", held for 25.228708ms
	W0917 10:56:19.875478    6580 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-929000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-929000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 10:56:19.883365    6580 out.go:201] 
	W0917 10:56:19.887392    6580 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 10:56:19.887442    6580 out.go:270] * 
	* 
	W0917 10:56:19.890393    6580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:56:19.897381    6580 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-929000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000: exit status 7 (69.959209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-929000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-929000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000: exit status 7 (30.828208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-929000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-929000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-929000 --alsologtostderr -v=1: exit status 83 (42.621625ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-929000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-929000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:56:20.082000    6594 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:56:20.082157    6594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:20.082160    6594 out.go:358] Setting ErrFile to fd 2...
	I0917 10:56:20.082163    6594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:56:20.082299    6594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:56:20.082517    6594 out.go:352] Setting JSON to false
	I0917 10:56:20.082525    6594 mustload.go:65] Loading cluster: newest-cni-929000
	I0917 10:56:20.082753    6594 config.go:182] Loaded profile config "newest-cni-929000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:56:20.086719    6594 out.go:177] * The control-plane node newest-cni-929000 host is not running: state=Stopped
	I0917 10:56:20.090875    6594 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-929000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-929000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000: exit status 7 (30.711375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-929000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000: exit status 7 (29.935333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-929000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (155/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 11.41
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.38
29 TestAddons/serial/Volcano 37.41
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 17.57
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.28
39 TestAddons/parallel/CSI 44.49
40 TestAddons/parallel/Headlamp 17.64
41 TestAddons/parallel/CloudSpanner 5.22
42 TestAddons/parallel/LocalPath 41.94
43 TestAddons/parallel/NvidiaDevicePlugin 5.16
44 TestAddons/parallel/Yakd 10.28
45 TestAddons/StoppedEnableDisable 12.39
53 TestHyperKitDriverInstallOrUpdate 10.88
56 TestErrorSpam/setup 34.57
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.7
60 TestErrorSpam/unpause 0.63
61 TestErrorSpam/stop 55.29
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.59
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.92
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.69
73 TestFunctional/serial/CacheCmd/cache/add_local 1.37
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.85
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 39.27
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.67
84 TestFunctional/serial/LogsFileCmd 0.7
85 TestFunctional/serial/InvalidService 4.15
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 9.71
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 23.99
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.42
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
111 TestFunctional/parallel/License 0.22
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.06
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.09
124 TestFunctional/parallel/ServiceCmd/List 0.3
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.12
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 5.43
133 TestFunctional/parallel/MountCmd/specific-port 1.9
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.2
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.59
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.89
142 TestFunctional/parallel/ImageCommands/Setup 1.94
143 TestFunctional/parallel/DockerEnv/bash 0.27
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.39
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 178.74
161 TestMultiControlPlane/serial/DeployApp 8.04
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 89.48
164 TestMultiControlPlane/serial/NodeLabels 0.14
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
166 TestMultiControlPlane/serial/CopyFile 4.39
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.26
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.43
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 2.22
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.3
277 TestNoKubernetes/serial/Stop 2.02
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.63
294 TestStartStop/group/old-k8s-version/serial/Stop 3.39
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
307 TestStartStop/group/no-preload/serial/Stop 2.71
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/embed-certs/serial/Stop 3.11
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.92
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
336 TestStartStop/group/newest-cni/serial/Stop 3.13
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-345000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-345000: exit status 85 (97.553875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-345000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |          |
	|         | -p download-only-345000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:55:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:55:21.167437    1842 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:55:21.167589    1842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:21.167592    1842 out.go:358] Setting ErrFile to fd 2...
	I0917 09:55:21.167594    1842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:21.167741    1842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	W0917 09:55:21.167839    1842 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19662-1312/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19662-1312/.minikube/config/config.json: no such file or directory
	I0917 09:55:21.169091    1842 out.go:352] Setting JSON to true
	I0917 09:55:21.186649    1842 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1484,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 09:55:21.186718    1842 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:55:21.192043    1842 out.go:97] [download-only-345000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 09:55:21.192202    1842 notify.go:220] Checking for updates...
	W0917 09:55:21.192245    1842 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 09:55:21.195818    1842 out.go:169] MINIKUBE_LOCATION=19662
	I0917 09:55:21.202066    1842 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 09:55:21.207062    1842 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 09:55:21.211009    1842 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:55:21.213992    1842 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	W0917 09:55:21.219900    1842 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 09:55:21.220089    1842 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:55:21.225074    1842 out.go:97] Using the qemu2 driver based on user configuration
	I0917 09:55:21.225095    1842 start.go:297] selected driver: qemu2
	I0917 09:55:21.225111    1842 start.go:901] validating driver "qemu2" against <nil>
	I0917 09:55:21.225206    1842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:55:21.229024    1842 out.go:169] Automatically selected the socket_vmnet network
	I0917 09:55:21.234522    1842 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0917 09:55:21.234608    1842 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 09:55:21.234634    1842 cni.go:84] Creating CNI manager for ""
	I0917 09:55:21.234667    1842 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 09:55:21.234718    1842 start.go:340] cluster config:
	{Name:download-only-345000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:55:21.239852    1842 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:21.244016    1842 out.go:97] Downloading VM boot image ...
	I0917 09:55:21.244031    1842 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0917 09:55:26.768430    1842 out.go:97] Starting "download-only-345000" primary control-plane node in "download-only-345000" cluster
	I0917 09:55:26.768452    1842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:26.825947    1842 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 09:55:26.825955    1842 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:26.826173    1842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:26.831233    1842 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 09:55:26.831239    1842 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:26.914161    1842 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 09:55:32.181480    1842 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:32.181643    1842 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:32.884923    1842 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 09:55:32.885120    1842 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/download-only-345000/config.json ...
	I0917 09:55:32.885136    1842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/download-only-345000/config.json: {Name:mkd7327fcf68477decfb54ee13291a63ff74676c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:32.885393    1842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:32.885589    1842 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0917 09:55:33.419858    1842 out.go:193] 
	W0917 09:55:33.426918    1842 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780 0x106c0d780] Decompressors:map[bz2:0x140005afd10 gz:0x140005afd18 tar:0x140005afca0 tar.bz2:0x140005afcd0 tar.gz:0x140005afce0 tar.xz:0x140005afcf0 tar.zst:0x140005afd00 tbz2:0x140005afcd0 tgz:0x140005afce0 txz:0x140005afcf0 tzst:0x140005afd00 xz:0x140005afd20 zip:0x140005afd30 zst:0x140005afd28] Getters:map[file:0x14001422550 http:0x140005f20a0 https:0x140005f2190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0917 09:55:33.426940    1842 out_reason.go:110] 
	W0917 09:55:33.434854    1842 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 09:55:33.438754    1842 out.go:193] 
	
	
	* The control-plane node download-only-345000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-345000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-345000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (11.41s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-470000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-470000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (11.405964333s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (11.41s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-470000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-470000: exit status 85 (82.066708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-345000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | -p download-only-345000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-345000        | download-only-345000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| start   | -o=json --download-only        | download-only-470000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | -p download-only-470000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:55:33
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:55:33.850900    1866 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:55:33.851041    1866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:33.851045    1866 out.go:358] Setting ErrFile to fd 2...
	I0917 09:55:33.851047    1866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:33.851186    1866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 09:55:33.852237    1866 out.go:352] Setting JSON to true
	I0917 09:55:33.868456    1866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1496,"bootTime":1726590637,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 09:55:33.868537    1866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:55:33.872838    1866 out.go:97] [download-only-470000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 09:55:33.872966    1866 notify.go:220] Checking for updates...
	I0917 09:55:33.875709    1866 out.go:169] MINIKUBE_LOCATION=19662
	I0917 09:55:33.878769    1866 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 09:55:33.882827    1866 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 09:55:33.885786    1866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:55:33.888756    1866 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	W0917 09:55:33.893092    1866 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 09:55:33.893233    1866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:55:33.896716    1866 out.go:97] Using the qemu2 driver based on user configuration
	I0917 09:55:33.896726    1866 start.go:297] selected driver: qemu2
	I0917 09:55:33.896729    1866 start.go:901] validating driver "qemu2" against <nil>
	I0917 09:55:33.896774    1866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:55:33.899782    1866 out.go:169] Automatically selected the socket_vmnet network
	I0917 09:55:33.904796    1866 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0917 09:55:33.904877    1866 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 09:55:33.904895    1866 cni.go:84] Creating CNI manager for ""
	I0917 09:55:33.904919    1866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:55:33.904930    1866 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 09:55:33.904974    1866 start.go:340] cluster config:
	{Name:download-only-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:55:33.908249    1866 iso.go:125] acquiring lock: {Name:mkca66fb309119a853583b80a7cdd08bbea34680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:33.911755    1866 out.go:97] Starting "download-only-470000" primary control-plane node in "download-only-470000" cluster
	I0917 09:55:33.911762    1866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:33.976982    1866 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 09:55:33.977012    1866 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:33.977220    1866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:33.982462    1866 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 09:55:33.982471    1866 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:34.101814    1866 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 09:55:43.010242    1866 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:43.010412    1866 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 09:55:43.533899    1866 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 09:55:43.534110    1866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/download-only-470000/config.json ...
	I0917 09:55:43.534132    1866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/download-only-470000/config.json: {Name:mk21ea13ff35cdc8ea5a64c9c9823d0c81f30d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:43.534531    1866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:43.534659    1866 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19662-1312/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-470000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-470000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-470000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.37s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-006000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-006000
--- PASS: TestBinaryMirror (0.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-439000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-439000: exit status 85 (56.421792ms)

-- stdout --
	* Profile "addons-439000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-439000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-439000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-439000: exit status 85 (56.347709ms)

-- stdout --
	* Profile "addons-439000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-439000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (200.38s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-439000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-439000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m20.377230125s)
--- PASS: TestAddons/Setup (200.38s)

TestAddons/serial/Volcano (37.41s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.544958ms
addons_test.go:897: volcano-scheduler stabilized in 7.586583ms
addons_test.go:905: volcano-admission stabilized in 7.595792ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-59vn4" [56fe7359-cf77-476b-8cd0-e46013a83533] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004988167s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-vb76l" [f37304a9-4c59-4598-b10d-df2bff3cb402] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.007032334s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-7m7wj" [672cfb1f-c8ec-4550-a1d4-3274d1d89933] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.008953917s
addons_test.go:932: (dbg) Run:  kubectl --context addons-439000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-439000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-439000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e9f75555-f779-4dfb-9d85-d6d12e82ac5b] Pending
helpers_test.go:344: "test-job-nginx-0" [e9f75555-f779-4dfb-9d85-d6d12e82ac5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e9f75555-f779-4dfb-9d85-d6d12e82ac5b] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.006544791s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable volcano --alsologtostderr -v=1: (10.13978025s)
--- PASS: TestAddons/serial/Volcano (37.41s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-439000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-439000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (17.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-439000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-439000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-439000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [76e12dc3-f5a1-440d-a75d-c6acf96fffdd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [76e12dc3-f5a1-440d-a75d-c6acf96fffdd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009464208s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-439000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable ingress --alsologtostderr -v=1: (7.292171125s)
--- PASS: TestAddons/parallel/Ingress (17.57s)

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9kf7p" [1a22ff28-1b0e-4ccd-822a-f6b707202278] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011932417s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-439000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-439000: (5.281592542s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.215708ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4dqp2" [bbed7153-e5e6-4bce-9f35-b76c294d2683] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007616541s
addons_test.go:417: (dbg) Run:  kubectl --context addons-439000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (44.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.618083ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ff64c8c7-0dd7-4c88-bfd2-f48dd897d9be] Pending
helpers_test.go:344: "task-pv-pod" [ff64c8c7-0dd7-4c88-bfd2-f48dd897d9be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ff64c8c7-0dd7-4c88-bfd2-f48dd897d9be] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004620833s
addons_test.go:590: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-439000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-439000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-439000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-439000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5e094cd5-843d-46e9-b750-80989b1c9565] Pending
helpers_test.go:344: "task-pv-pod-restore" [5e094cd5-843d-46e9-b750-80989b1c9565] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5e094cd5-843d-46e9-b750-80989b1c9565] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008666083s
addons_test.go:632: (dbg) Run:  kubectl --context addons-439000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-439000 delete pod task-pv-pod-restore: (1.155777834s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-439000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-439000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.099765292s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.49s)

TestAddons/parallel/Headlamp (17.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-439000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-xtxd8" [1c21c05f-b9c3-43fa-96d7-66e36678ea59] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-xtxd8" [1c21c05f-b9c3-43fa-96d7-66e36678ea59] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.010941083s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable headlamp --alsologtostderr -v=1: (5.288137541s)
--- PASS: TestAddons/parallel/Headlamp (17.64s)

TestAddons/parallel/CloudSpanner (5.22s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zmtnt" [2bc75a87-b2fd-4719-b02b-6b2a1833d3bc] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013838041s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-439000
--- PASS: TestAddons/parallel/CloudSpanner (5.22s)

TestAddons/parallel/LocalPath (41.94s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-439000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-439000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a6f035ef-7b4a-4d0c-b667-73e638e11e39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a6f035ef-7b4a-4d0c-b667-73e638e11e39] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a6f035ef-7b4a-4d0c-b667-73e638e11e39] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.012664s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ssh "cat /opt/local-path-provisioner/pvc-8dcb0074-8413-48d9-8194-21b51dc8cdfd_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-439000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-439000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.447542083s)
--- PASS: TestAddons/parallel/LocalPath (41.94s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nms9k" [632af194-297b-4de8-a9f0-5ef6eb83279f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004843625s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-439000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (10.28s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fgfrb" [c11aa346-8240-4d4d-9ab3-1d54373bc75c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007204s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable yakd --alsologtostderr -v=1: (5.267690667s)
--- PASS: TestAddons/parallel/Yakd (10.28s)

TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-439000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-439000: (12.20499975s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-439000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-439000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-439000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.88s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.88s)

TestErrorSpam/setup (34.57s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-661000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-661000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 --driver=qemu2 : (34.5658665s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.57s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.7s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 pause
--- PASS: TestErrorSpam/pause (0.70s)

TestErrorSpam/unpause (0.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (55.29s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 stop: (3.191989666s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 stop: (26.068303084s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-661000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-661000 stop: (26.031384s)
--- PASS: TestErrorSpam/stop (55.29s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19662-1312/.minikube/files/etc/test/nested/copy/1840/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.59s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.585565208s)
--- PASS: TestFunctional/serial/StartWithProxy (47.59s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.92s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-334000 --alsologtostderr -v=8: (35.91696025s)
functional_test.go:663: soft start took 35.917346875s for "functional-334000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.92s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-334000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.1: (1.0240725s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1225577395/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache add minikube-local-cache-test:functional-334000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 cache add minikube-local-cache-test:functional-334000: (1.029063333s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache delete minikube-local-cache-test:functional-334000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.801833ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.85s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 kubectl -- --context functional-334000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.85s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-334000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-334000 get pods: (1.007985875s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (39.27s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.272581459s)
functional_test.go:761: restart took 39.272672458s for "functional-334000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.27s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-334000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.67s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.7s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1699351137/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.70s)

TestFunctional/serial/InvalidService (4.15s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-334000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-334000: exit status 115 (136.477667ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30420 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-334000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)

TestFunctional/parallel/ConfigCmd (0.23s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 config get cpus: exit status 14 (32.971792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 config get cpus: exit status 14 (31.069584ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (9.71s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1]
E0917 10:14:27.011516    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2941: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.71s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (123.724167ms)

-- stdout --
	* [functional-334000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0917 10:14:23.902170    2928 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:14:23.902289    2928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:14:23.902293    2928 out.go:358] Setting ErrFile to fd 2...
	I0917 10:14:23.902295    2928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:14:23.902432    2928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:14:23.903540    2928 out.go:352] Setting JSON to false
	I0917 10:14:23.919426    2928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2626,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:14:23.919499    2928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:14:23.925063    2928 out.go:177] * [functional-334000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 10:14:23.931982    2928 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:14:23.932028    2928 notify.go:220] Checking for updates...
	I0917 10:14:23.940026    2928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:14:23.943952    2928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:14:23.947020    2928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:14:23.954000    2928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:14:23.962038    2928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:14:23.966252    2928 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:14:23.966510    2928 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:14:23.970982    2928 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 10:14:23.975905    2928 start.go:297] selected driver: qemu2
	I0917 10:14:23.975911    2928 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:14:23.975968    2928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:14:23.981037    2928 out.go:201] 
	W0917 10:14:23.985056    2928 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 10:14:23.992978    2928 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (125.571125ms)

-- stdout --
	* [functional-334000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 10:14:23.771578    2924 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:14:23.771690    2924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:14:23.771695    2924 out.go:358] Setting ErrFile to fd 2...
	I0917 10:14:23.771697    2924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:14:23.771826    2924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
	I0917 10:14:23.773099    2924 out.go:352] Setting JSON to false
	I0917 10:14:23.791200    2924 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2626,"bootTime":1726590637,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0917 10:14:23.791288    2924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:14:23.795963    2924 out.go:177] * [functional-334000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0917 10:14:23.803042    2924 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:14:23.803097    2924 notify.go:220] Checking for updates...
	I0917 10:14:23.810037    2924 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	I0917 10:14:23.818027    2924 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 10:14:23.825868    2924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:14:23.833036    2924 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	I0917 10:14:23.837031    2924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:14:23.840278    2924 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:14:23.840535    2924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:14:23.844981    2924 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0917 10:14:23.852021    2924 start.go:297] selected driver: qemu2
	I0917 10:14:23.852026    2924 start.go:901] validating driver "qemu2" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:14:23.852071    2924 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:14:23.857906    2924 out.go:201] 
	W0917 10:14:23.861992    2924 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 10:14:23.869957    2924 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (23.99s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4c01e7fc-e00f-45f0-b5f1-8908e075e53d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013809042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-334000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-334000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3cba7bab-3639-483b-a6e3-ac3248771bff] Pending
helpers_test.go:344: "sp-pod" [3cba7bab-3639-483b-a6e3-ac3248771bff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3cba7bab-3639-483b-a6e3-ac3248771bff] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008059583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-334000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-334000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dafc33fb-9b5a-4626-a22f-dce81efface5] Pending
helpers_test.go:344: "sp-pod" [dafc33fb-9b5a-4626-a22f-dce81efface5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dafc33fb-9b5a-4626-a22f-dce81efface5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010182417s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-334000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.99s)

TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.42s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cp functional-334000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd330972954/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -n functional-334000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1840/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/test/nested/copy/1840/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1840.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/1840.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1840.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/1840.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/18402.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/18402.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/18402.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/18402.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-334000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo systemctl is-active crio": exit status 1 (95.985584ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)
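
The non-zero exit is the expected outcome here: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, and this cluster runs the docker runtime, so crio must report inactive. A quick manual cross-check against the same profile:

	out/minikube-darwin-arm64 -p functional-334000 ssh "sudo systemctl is-active docker"   # expect: active (exit 0)
	out/minikube-darwin-arm64 -p functional-334000 ssh "sudo systemctl is-active crio"    # expect: inactive (exit 3)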

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2780: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
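
The serial steps that follow exercise the usual tunnel workflow; condensed to the commands from this log (the tunnel process must keep running, so background it or use a second terminal):

	out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr &
	kubectl --context functional-334000 apply -f testdata/testsvc.yaml
	# once the LoadBalancer is reconciled, the ingress IP becomes queryable
	kubectl --context functional-334000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'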

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0985fd8d-2e8f-4d7a-9ebd-ddf08bfbadca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0985fd8d-2e8f-4d7a-9ebd-ddf08bfbadca] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003652416s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-334000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.139.47 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
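
With the tunnel and DNS resolution in place, the service is also reachable from the host by its cluster DNS name; a minimal manual probe (flags illustrative):

	curl --max-time 5 http://nginx-svc.default.svc.cluster.local./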

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-334000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-334000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-334000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-sjwwb" [e7274415-db5f-4307-8b31-cef7ad13570e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0917 10:14:06.495014    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:06.503234    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:06.516641    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:06.540004    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:06.583359    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:06.666778    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:06.830189    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:07.153607    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-sjwwb" [e7274415-db5f-4307-8b31-cef7ad13570e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0917 10:14:07.797259    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:09.081155    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005907958s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.09s)
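
The NodePort that the later ServiceCmd steps resolve can also be read directly off the Service object; a small sketch using the same context:

	kubectl --context functional-334000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'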

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service list -o json
functional_test.go:1494: Took "282.902291ms" to run "out/minikube-darwin-arm64 -p functional-334000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31467
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31467
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
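
Once the endpoint is known it can be probed directly; a minimal check against the URL found above (echoserver should echo the request details back):

	curl -s http://192.168.105.4:31467/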

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "82.881125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.49975ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "82.280083ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.437ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
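
A sketch of consuming the JSON form, assuming jq on the host and the valid/invalid top-level arrays that profile list -o json emits:

	out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'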

TestFunctional/parallel/MountCmd/any-port (5.43s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port51347177/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726593254664749000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port51347177/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726593254664749000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port51347177/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726593254664749000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port51347177/001/test-1726593254664749000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.414625ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 17:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 17:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 17:14 test-1726593254664749000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh cat /mount-9p/test-1726593254664749000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-334000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [853a7577-92f7-4f8d-8518-71d57a9f6690] Pending
helpers_test.go:344: "busybox-mount" [853a7577-92f7-4f8d-8518-71d57a9f6690] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0917 10:14:16.768263    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [853a7577-92f7-4f8d-8518-71d57a9f6690] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [853a7577-92f7-4f8d-8518-71d57a9f6690] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003817833s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-334000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port51347177/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.43s)
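
Condensed to a manual workflow, the steps above amount to the following (the host path is illustrative; the mount process must stay alive for the mount to persist):

	out/minikube-darwin-arm64 mount -p functional-334000 /tmp/hostdir:/mount-9p &
	out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p"
	out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p"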

TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3161448144/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": (1.546457375s)
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3161448144/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "sudo umount -f /mount-9p": exit status 1 (57.529625ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-334000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3161448144/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 1 (64.986792ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1: exit status 1 (86.34125ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-334000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-334000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2189418960/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-334000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-334000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format short --alsologtostderr:
I0917 10:14:37.406493    3086 out.go:345] Setting OutFile to fd 1 ...
I0917 10:14:37.406660    3086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.406664    3086 out.go:358] Setting ErrFile to fd 2...
I0917 10:14:37.406666    3086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.406802    3086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:14:37.407276    3086 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.407352    3086 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.408342    3086 ssh_runner.go:195] Run: systemctl --version
I0917 10:14:37.408351    3086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/functional-334000/id_rsa Username:docker}
I0917 10:14:37.433143    3086 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
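
The same listing is rendered in four formats, one per ImageList subtest in this group (the next three subtests exercise the remaining renderings):

	out/minikube-darwin-arm64 -p functional-334000 image ls --format short
	out/minikube-darwin-arm64 -p functional-334000 image ls --format table
	out/minikube-darwin-arm64 -p functional-334000 image ls --format json
	out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml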

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-334000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/library/minikube-local-cache-test | functional-334000 | b8c595b750a26 | 30B    |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format table --alsologtostderr:
I0917 10:14:38.065808    3100 out.go:345] Setting OutFile to fd 1 ...
I0917 10:14:38.066044    3100 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:38.066050    3100 out.go:358] Setting ErrFile to fd 2...
I0917 10:14:38.066053    3100 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:38.066194    3100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:14:38.066622    3100 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:38.066683    3100 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:38.067685    3100 ssh_runner.go:195] Run: systemctl --version
I0917 10:14:38.067694    3100 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/functional-334000/id_rsa Username:docker}
I0917 10:14:38.091663    3100 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"b8c595b750a264ee8364786c95579e9e24f5b37a8a66dddd72582ea5d216f0a1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-334000"],"size":"30"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-334000"],"siz
e":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e
4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"si
ze":"66000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format json --alsologtostderr:
I0917 10:14:37.991541    3097 out.go:345] Setting OutFile to fd 1 ...
I0917 10:14:37.991702    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.991705    3097 out.go:358] Setting ErrFile to fd 2...
I0917 10:14:37.991708    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.991836    3097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:14:37.992238    3097 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.992307    3097 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.993186    3097 ssh_runner.go:195] Run: systemctl --version
I0917 10:14:37.993194    3097 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/functional-334000/id_rsa Username:docker}
I0917 10:14:38.016956    3097 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
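
The JSON form is the easiest to post-process; for example, extracting every tag, assuming jq is installed on the host:

	out/minikube-darwin-arm64 -p functional-334000 image ls --format json | jq -r '.[].repoTags[]'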

TestFunctional/parallel/ImageCommands/ImageListYaml (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-334000
size: "4780000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: b8c595b750a264ee8364786c95579e9e24f5b37a8a66dddd72582ea5d216f0a1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-334000
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image ls --format yaml --alsologtostderr:
I0917 10:14:37.406502    3087 out.go:345] Setting OutFile to fd 1 ...
I0917 10:14:37.406662    3087 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.406665    3087 out.go:358] Setting ErrFile to fd 2...
I0917 10:14:37.406667    3087 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.406852    3087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:14:37.407338    3087 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.407410    3087 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.407686    3087 retry.go:31] will retry after 501.524074ms: connect: dial unix /Users/jenkins/minikube-integration/19662-1312/.minikube/machines/functional-334000/monitor: connect: connection refused
I0917 10:14:37.911684    3087 ssh_runner.go:195] Run: systemctl --version
I0917 10:14:37.911704    3087 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/functional-334000/id_rsa Username:docker}
I0917 10:14:37.943334    3087 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.59s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-334000 ssh pgrep buildkitd: exit status 1 (58.705ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build --alsologtostderr: (1.7544065s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build --alsologtostderr:
I0917 10:14:37.537037    3094 out.go:345] Setting OutFile to fd 1 ...
I0917 10:14:37.537241    3094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.537244    3094 out.go:358] Setting ErrFile to fd 2...
I0917 10:14:37.537247    3094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:14:37.537386    3094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1312/.minikube/bin
I0917 10:14:37.537789    3094 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.538582    3094 config.go:182] Loaded profile config "functional-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:14:37.539423    3094 ssh_runner.go:195] Run: systemctl --version
I0917 10:14:37.539431    3094 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1312/.minikube/machines/functional-334000/id_rsa Username:docker}
I0917 10:14:37.561793    3094 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3801460237.tar
I0917 10:14:37.561848    3094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 10:14:37.565253    3094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3801460237.tar
I0917 10:14:37.566778    3094 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3801460237.tar: stat -c "%s %y" /var/lib/minikube/build/build.3801460237.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3801460237.tar': No such file or directory
I0917 10:14:37.566797    3094 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3801460237.tar --> /var/lib/minikube/build/build.3801460237.tar (3072 bytes)
I0917 10:14:37.575418    3094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3801460237
I0917 10:14:37.579195    3094 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3801460237 -xf /var/lib/minikube/build/build.3801460237.tar
I0917 10:14:37.582927    3094 docker.go:360] Building image: /var/lib/minikube/build/build.3801460237
I0917 10:14:37.582978    3094 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-334000 /var/lib/minikube/build/build.3801460237
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:3a76f4f95f3933f9b0bfa4fedaee5061c159023d6a71f581d31b913a0d98ed0f done
#8 naming to localhost/my-image:functional-334000 done
#8 DONE 0.0s
I0917 10:14:39.188024    3094 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-334000 /var/lib/minikube/build/build.3801460237: (1.605066083s)
I0917 10:14:39.188095    3094 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3801460237
I0917 10:14:39.192112    3094 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3801460237.tar
I0917 10:14:39.195257    3094 build_images.go:217] Built localhost/my-image:functional-334000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3801460237.tar
I0917 10:14:39.195275    3094 build_images.go:133] succeeded building to: functional-334000
I0917 10:14:39.195280    3094 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)
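
Judging from buildkit steps #5-#7 above, the testdata/build context boils down to a three-instruction Dockerfile along these lines (reconstructed from the log, not copied from the source tree):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /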

TestFunctional/parallel/ImageCommands/Setup (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/17 10:14:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.924575042s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-334000 docker-env) && out/minikube-darwin-arm64 status -p functional-334000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-334000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)
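
The eval trick works because docker-env prints export statements that point the host's docker client at the daemon inside the VM; the output is typically shaped like the following (values illustrative, modeled on this profile's address):

	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://192.168.105.4:2376"
	export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19662-1312/.minikube/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-334000"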

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load --daemon kicbase/echo-server:functional-334000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load --daemon kicbase/echo-server:functional-334000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-334000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load --daemon kicbase/echo-server:functional-334000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image save kicbase/echo-server:functional-334000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image rm kicbase/echo-server:functional-334000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-334000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-334000 image save --daemon kicbase/echo-server:functional-334000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
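
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above form a save/load round-trip. A condensed sketch of the same flow; the tarball path is illustrative (this run used a Jenkins workspace path):

  minikube -p functional-334000 image save kicbase/echo-server:functional-334000 /tmp/echo-server-save.tar
  minikube -p functional-334000 image rm kicbase/echo-server:functional-334000
  minikube -p functional-334000 image load /tmp/echo-server-save.tar
  minikube -p functional-334000 image ls   # the tag should be listed again
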

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-334000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-334000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (178.74s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-468000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0917 10:14:47.494655    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:15:28.457296    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:16:50.379209    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-468000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m58.538976458s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.74s)
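
Stripped of test-harness flags, the invocation above reduces to the following; --ha provisions additional control-plane nodes, and status then reports on each of them:

  minikube start -p ha-468000 --ha --wait=true --memory=2200 --driver=qemu2
  minikube -p ha-468000 status
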

TestMultiControlPlane/serial/DeployApp (8.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-468000 -- rollout status deployment/busybox: (6.518592167s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-5zqsh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-92l4h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-hzt85 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-5zqsh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-92l4h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-hzt85 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-5zqsh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-92l4h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-hzt85 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.04s)
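
The DeployApp assertions reduce to: apply the DNS test deployment, wait for the rollout, then run the same nslookup in every replica. A sketch (the pod name is illustrative; take one from the get pods output):

  kubectl --context ha-468000 rollout status deployment/busybox
  kubectl --context ha-468000 get pods -o jsonpath='{.items[*].metadata.name}'
  kubectl --context ha-468000 exec busybox-7dff88458-5zqsh -- nslookup kubernetes.default.svc.cluster.local
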

TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-5zqsh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-5zqsh -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-92l4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-92l4h -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-hzt85 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-468000 -- exec busybox-7dff88458-hzt85 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
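
Each iteration above resolves host.minikube.internal inside a pod, then pings the returned gateway address (192.168.105.1 on this run):

  kubectl --context ha-468000 exec busybox-7dff88458-5zqsh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-468000 exec busybox-7dff88458-5zqsh -- sh -c "ping -c 1 192.168.105.1"
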

TestMultiControlPlane/serial/AddWorkerNode (89.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-468000 -v=7 --alsologtostderr
E0917 10:18:42.334054    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.341702    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.355111    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.376884    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.420231    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.503331    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.666730    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:42.990092    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:43.632475    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:44.915792    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:47.479213    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:18:52.600643    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:19:02.843962    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:19:06.489184    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-468000 -v=7 --alsologtostderr: (1m29.242334125s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (89.48s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-468000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.39s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp testdata/cp-test.txt ha-468000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3603484123/001/cp-test_ha-468000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000:/home/docker/cp-test.txt ha-468000-m02:/home/docker/cp-test_ha-468000_ha-468000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test_ha-468000_ha-468000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000:/home/docker/cp-test.txt ha-468000-m03:/home/docker/cp-test_ha-468000_ha-468000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test_ha-468000_ha-468000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000:/home/docker/cp-test.txt ha-468000-m04:/home/docker/cp-test_ha-468000_ha-468000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test_ha-468000_ha-468000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp testdata/cp-test.txt ha-468000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3603484123/001/cp-test_ha-468000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m02:/home/docker/cp-test.txt ha-468000:/home/docker/cp-test_ha-468000-m02_ha-468000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test_ha-468000-m02_ha-468000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m02:/home/docker/cp-test.txt ha-468000-m03:/home/docker/cp-test_ha-468000-m02_ha-468000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test_ha-468000-m02_ha-468000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m02:/home/docker/cp-test.txt ha-468000-m04:/home/docker/cp-test_ha-468000-m02_ha-468000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test_ha-468000-m02_ha-468000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp testdata/cp-test.txt ha-468000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3603484123/001/cp-test_ha-468000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m03:/home/docker/cp-test.txt ha-468000:/home/docker/cp-test_ha-468000-m03_ha-468000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test_ha-468000-m03_ha-468000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m03:/home/docker/cp-test.txt ha-468000-m02:/home/docker/cp-test_ha-468000-m03_ha-468000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test_ha-468000-m03_ha-468000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m03:/home/docker/cp-test.txt ha-468000-m04:/home/docker/cp-test_ha-468000-m03_ha-468000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test_ha-468000-m03_ha-468000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp testdata/cp-test.txt ha-468000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3603484123/001/cp-test_ha-468000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m04:/home/docker/cp-test.txt ha-468000:/home/docker/cp-test_ha-468000-m04_ha-468000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000 "sudo cat /home/docker/cp-test_ha-468000-m04_ha-468000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m04:/home/docker/cp-test.txt ha-468000-m02:/home/docker/cp-test_ha-468000-m04_ha-468000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test_ha-468000-m04_ha-468000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 cp ha-468000-m04:/home/docker/cp-test.txt ha-468000-m03:/home/docker/cp-test_ha-468000-m04_ha-468000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-468000 ssh -n ha-468000-m03 "sudo cat /home/docker/cp-test_ha-468000-m04_ha-468000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.39s)
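
Every CopyFile assertion above follows one shape: copy with minikube cp (host-to-node or node-to-node), then read the file back over SSH on the target node:

  minikube -p ha-468000 cp testdata/cp-test.txt ha-468000-m02:/home/docker/cp-test.txt
  minikube -p ha-468000 ssh -n ha-468000-m02 "sudo cat /home/docker/cp-test.txt"
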

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0917 10:28:42.320640    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/functional-334000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:29:06.477363    1840 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1312/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.260871667s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.43s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-843000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-843000 --output=json --user=testUser: (3.433811541s)
--- PASS: TestJSONOutput/stop/Command (3.43s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-833000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-833000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.908375ms)
-- stdout --
	{"specversion":"1.0","id":"bd34ebaf-1db4-4727-bf5e-cb8c4a1f5c2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-833000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4e7bb9a-e582-427b-b817-4f73d20003f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"5d856227-2c0e-4141-80b0-18a3a8a7b90e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig"}}
	{"specversion":"1.0","id":"d2fa6db1-f4bf-4c92-941f-f595bb9eb2e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"672243ef-28ca-4d14-bc54-9b42a7442ff4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fb8a9087-1b6c-47b9-a3d5-3c3a9f80e2c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube"}}
	{"specversion":"1.0","id":"78d01ace-5131-4394-993a-efe3390fc2e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"73f99486-49e7-42fe-b16a-035b2a2de06a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-833000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-833000
--- PASS: TestErrorJSONOutput (0.20s)
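
Each stdout line above is a CloudEvents-style JSON object, so the stream is machine-parseable. A sketch of extracting just the error events, assuming jq is installed (jq is not part of this test run):

  minikube start -p json-output-error-833000 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
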

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (2.22s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-498000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.616ms)
-- stdout --
	* [NoKubernetes-498000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1312/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1312/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
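
The MK_USAGE message above also names the fix: drop the conflicting global setting, then retry without a version pin:

  minikube config unset kubernetes-version
  minikube start -p NoKubernetes-498000 --no-kubernetes --driver=qemu2
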

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-498000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-498000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.596334ms)
-- stdout --
	* The control-plane node NoKubernetes-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-498000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
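
The probe used here (and in VerifyK8sNotRunningSecond below) is a plain systemd check over SSH; a non-zero exit is the expected "kubelet not active" result:

  minikube ssh -p NoKubernetes-498000 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not active"
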

TestNoKubernetes/serial/ProfileList (31.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.581825958s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.722780458s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.30s)

TestNoKubernetes/serial/Stop (2.02s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-498000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-498000: (2.018341792s)
--- PASS: TestNoKubernetes/serial/Stop (2.02s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-498000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-498000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.030792ms)
-- stdout --
	* The control-plane node NoKubernetes-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-498000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-293000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)

TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-842000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-842000 --alsologtostderr -v=3: (3.3930805s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-842000 -n old-k8s-version-842000: exit status 7 (31.329292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-842000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
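
The EnableAddonAfterStop pattern, repeated for each profile below: confirm the host reports Stopped (exit status 7 is tolerated), then enable the addon while the cluster is down:

  minikube status --format={{.Host}} -p old-k8s-version-842000 || true
  minikube addons enable dashboard -p old-k8s-version-842000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
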

TestStartStop/group/no-preload/serial/Stop (2.71s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-761000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-761000 --alsologtostderr -v=3: (2.711754875s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.71s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-761000 -n no-preload-761000: exit status 7 (56.886958ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-761000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-238000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-238000 --alsologtostderr -v=3: (3.109015417s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-238000 -n embed-certs-238000: exit status 7 (52.659416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-238000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-080000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-080000 --alsologtostderr -v=3: (2.91828975s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-080000 -n default-k8s-diff-port-080000: exit status 7 (57.990458ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-080000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-929000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-929000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-929000 --alsologtostderr -v=3: (3.133343584s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-929000 -n newest-cni-929000: exit status 7 (62.328875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-929000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.3s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-344000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-344000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-344000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/hosts:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/resolv.conf:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-344000

>>> host: crictl pods:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: crictl containers:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> k8s: describe netcat deployment:
error: context "cilium-344000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-344000" does not exist

>>> k8s: netcat logs:
error: context "cilium-344000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-344000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-344000" does not exist

>>> k8s: coredns logs:
error: context "cilium-344000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-344000" does not exist

>>> k8s: api server logs:
error: context "cilium-344000" does not exist

>>> host: /etc/cni:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: ip a s:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: ip r s:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: iptables-save:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: iptables table nat:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-344000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-344000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-344000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-344000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-344000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-344000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-344000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-344000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-344000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-344000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-344000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: kubelet daemon config:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> k8s: kubelet logs:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-344000

>>> host: docker daemon status:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: docker daemon config:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: docker system info:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: cri-docker daemon status:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: cri-docker daemon config:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: cri-dockerd version:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: containerd daemon status:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: containerd daemon config:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: containerd config dump:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: crio daemon status:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: crio daemon config:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: /etc/crio:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

>>> host: crio config:
* Profile "cilium-344000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-344000"

----------------------- debugLogs end: cilium-344000 [took: 2.197395083s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-344000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-344000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-388000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-388000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
