Test Report: QEMU_macOS 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.58
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.26
22 TestOffline 9.99
33 TestAddons/parallel/Registry 71.34
46 TestCertOptions 10.09
47 TestCertExpiration 195.32
48 TestDockerFlags 10.11
49 TestForceSystemdFlag 10.12
50 TestForceSystemdEnv 11.46
95 TestFunctional/parallel/ServiceCmdConnect 36.66
167 TestMultiControlPlane/serial/StopSecondaryNode 64.13
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.94
169 TestMultiControlPlane/serial/RestartSecondaryNode 83.03
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.37
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.08
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.07
184 TestJSONOutput/start/Command 10.04
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.04
213 TestMinikubeProfile 10.13
216 TestMountStart/serial/StartWithMountFirst 10.13
219 TestMultiNode/serial/FreshStart2Nodes 9.96
220 TestMultiNode/serial/DeployApp2Nodes 119.24
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.08
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 52.96
228 TestMultiNode/serial/RestartKeepsNodes 8.58
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.07
231 TestMultiNode/serial/RestartMultiNode 5.24
232 TestMultiNode/serial/ValidateNameConflict 20.35
236 TestPreload 10.14
238 TestScheduledStopUnix 10.01
239 TestSkaffold 12.45
242 TestRunningBinaryUpgrade 590.48
244 TestKubernetesUpgrade 18.61
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.39
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.21
260 TestStoppedBinaryUpgrade/Upgrade 575.59
262 TestPause/serial/Start 10.06
272 TestNoKubernetes/serial/StartWithK8s 9.92
273 TestNoKubernetes/serial/StartWithStopK8s 5.31
274 TestNoKubernetes/serial/Start 5.32
278 TestNoKubernetes/serial/StartNoArgs 5.35
280 TestNetworkPlugins/group/auto/Start 9.9
281 TestNetworkPlugins/group/kindnet/Start 9.81
282 TestNetworkPlugins/group/flannel/Start 9.77
283 TestNetworkPlugins/group/enable-default-cni/Start 9.97
284 TestNetworkPlugins/group/bridge/Start 9.72
285 TestNetworkPlugins/group/kubenet/Start 9.83
286 TestNetworkPlugins/group/custom-flannel/Start 9.82
287 TestNetworkPlugins/group/calico/Start 9.9
288 TestNetworkPlugins/group/false/Start 9.93
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.99
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.11
302 TestStartStop/group/no-preload/serial/FirstStart 10.15
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 7.39
309 TestStartStop/group/embed-certs/serial/FirstStart 10
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
313 TestStartStop/group/no-preload/serial/Pause 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.2
316 TestStartStop/group/embed-certs/serial/DeployApp 0.09
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
323 TestStartStop/group/embed-certs/serial/SecondStart 5.29
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 9.92
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
340 TestStartStop/group/newest-cni/serial/SecondStart 5.26
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (13.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-629000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-629000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.577169375s)

-- stdout --
	{"specversion":"1.0","id":"d0b073bf-6bde-4752-99ee-c7258ff03fd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-629000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d291695e-0015-4559-bd3f-2c0cab18059f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"4097f369-fbf5-4136-83f6-0eb88460f2e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig"}}
	{"specversion":"1.0","id":"a6127fc7-806b-446d-a756-abef2f9353a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"80b02ead-339f-41f0-86d7-85bf998f3b89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"846a284a-378b-4514-8f55-b5016573e229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube"}}
	{"specversion":"1.0","id":"f7fa9136-5aa6-4924-a7b4-dc68c5d7793f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"2e4a2238-5b70-41b8-89c2-83bb39b7e16f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc783728-1d12-4554-9593-35ad4fc54c09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"aeaee5bd-7b47-42d9-a5c5-a4b38ed4d812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cfb462a-836a-478d-94a3-8b4515406f54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-629000\" primary control-plane node in \"download-only-629000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d311960c-d156-41b1-832f-149efb754df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6f93f86-f618-4bbd-b5b3-e02c171f78ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780] Decompressors:map[bz2:0x140004e1a50 gz:0x140004e1a58 tar:0x140004e1a00 tar.bz2:0x140004e1a10 tar.gz:0x140004e1a20 tar.xz:0x140004e1a30 tar.zst:0x140004e1a40 tbz2:0x140004e1a10 tgz:0x14
0004e1a20 txz:0x140004e1a30 tzst:0x140004e1a40 xz:0x140004e1a60 zip:0x140004e1a70 zst:0x140004e1a68] Getters:map[file:0x140003f4810 http:0x1400079c3c0 https:0x1400079c410] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"240ea753-3bc3-4c46-90de-64118e18c75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0919 11:38:13.631328    1620 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:38:13.631467    1620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:13.631470    1620 out.go:358] Setting ErrFile to fd 2...
	I0919 11:38:13.631472    1620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:13.631592    1620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	W0919 11:38:13.631683    1620 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19664-1099/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19664-1099/.minikube/config/config.json: no such file or directory
	I0919 11:38:13.632891    1620 out.go:352] Setting JSON to true
	I0919 11:38:13.650992    1620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":458,"bootTime":1726770635,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:38:13.651047    1620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:38:13.657258    1620 out.go:97] [download-only-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:38:13.657418    1620 notify.go:220] Checking for updates...
	W0919 11:38:13.657467    1620 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 11:38:13.660116    1620 out.go:169] MINIKUBE_LOCATION=19664
	I0919 11:38:13.663194    1620 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:38:13.667399    1620 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:38:13.670128    1620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:38:13.673144    1620 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	W0919 11:38:13.679218    1620 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 11:38:13.679438    1620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:38:13.684171    1620 out.go:97] Using the qemu2 driver based on user configuration
	I0919 11:38:13.684188    1620 start.go:297] selected driver: qemu2
	I0919 11:38:13.684202    1620 start.go:901] validating driver "qemu2" against <nil>
	I0919 11:38:13.684281    1620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 11:38:13.687136    1620 out.go:169] Automatically selected the socket_vmnet network
	I0919 11:38:13.693341    1620 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 11:38:13.693424    1620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 11:38:13.693456    1620 cni.go:84] Creating CNI manager for ""
	I0919 11:38:13.693495    1620 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 11:38:13.693553    1620 start.go:340] cluster config:
	{Name:download-only-629000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:38:13.698911    1620 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 11:38:13.703497    1620 out.go:97] Downloading VM boot image ...
	I0919 11:38:13.703513    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0919 11:38:20.115451    1620 out.go:97] Starting "download-only-629000" primary control-plane node in "download-only-629000" cluster
	I0919 11:38:20.115469    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 11:38:20.177351    1620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 11:38:20.177363    1620 cache.go:56] Caching tarball of preloaded images
	I0919 11:38:20.177507    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 11:38:20.180351    1620 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0919 11:38:20.180370    1620 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:20.276620    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 11:38:25.893014    1620 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:25.893160    1620 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:26.588978    1620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0919 11:38:26.589182    1620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/download-only-629000/config.json ...
	I0919 11:38:26.589199    1620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/download-only-629000/config.json: {Name:mkb0fe49d0d203e8cbca1874a28797fe699f16a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:38:26.589435    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 11:38:26.589626    1620 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0919 11:38:27.126190    1620 out.go:193] 
	W0919 11:38:27.135191    1620 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780] Decompressors:map[bz2:0x140004e1a50 gz:0x140004e1a58 tar:0x140004e1a00 tar.bz2:0x140004e1a10 tar.gz:0x140004e1a20 tar.xz:0x140004e1a30 tar.zst:0x140004e1a40 tbz2:0x140004e1a10 tgz:0x140004e1a20 txz:0x140004e1a30 tzst:0x140004e1a40 xz:0x140004e1a60 zip:0x140004e1a70 zst:0x140004e1a68] Getters:map[file:0x140003f4810 http:0x1400079c3c0 https:0x1400079c410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0919 11:38:27.135220    1620 out_reason.go:110] 
	W0919 11:38:27.144272    1620 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 11:38:27.148228    1620 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-629000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.58s)
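
The 404 above is the root cause: dl.k8s.io serves no darwin/arm64 kubectl build for v1.20.0 (arm64 macOS client binaries only started shipping in later Kubernetes releases), so the checksum fetch fails before any download begins. A minimal reproduction sketch outside the test harness, using the URLs from the log; the expected status codes are assumptions about dl.k8s.io's current responses:

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -1   # assumed: 404
	curl -sI https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 | head -1   # assumed: 200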

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.26s)

=== RUN   TestBinaryMirror
I0919 11:38:36.580556    1618 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-071000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-071000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (161.2405ms)

-- stdout --
	* [binary-mirror-071000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-071000" primary control-plane node in "binary-mirror-071000" cluster
	
	

-- /stdout --
** stderr ** 
	I0919 11:38:36.639710    1694 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:38:36.639836    1694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:36.639843    1694 out.go:358] Setting ErrFile to fd 2...
	I0919 11:38:36.639845    1694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:36.639972    1694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 11:38:36.640983    1694 out.go:352] Setting JSON to false
	I0919 11:38:36.657247    1694 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":481,"bootTime":1726770635,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:38:36.657321    1694 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:38:36.662021    1694 out.go:177] * [binary-mirror-071000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:38:36.669229    1694 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 11:38:36.669296    1694 notify.go:220] Checking for updates...
	I0919 11:38:36.675167    1694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:38:36.678197    1694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:38:36.679625    1694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:38:36.683140    1694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 11:38:36.686348    1694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:38:36.691046    1694 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 11:38:36.694151    1694 start.go:297] selected driver: qemu2
	I0919 11:38:36.694158    1694 start.go:901] validating driver "qemu2" against <nil>
	I0919 11:38:36.694210    1694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 11:38:36.697111    1694 out.go:177] * Automatically selected the socket_vmnet network
	I0919 11:38:36.702405    1694 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 11:38:36.702522    1694 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 11:38:36.702542    1694 cni.go:84] Creating CNI manager for ""
	I0919 11:38:36.702573    1694 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 11:38:36.702581    1694 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 11:38:36.702624    1694 start.go:340] cluster config:
	{Name:binary-mirror-071000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:38:36.706158    1694 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 11:38:36.715152    1694 out.go:177] * Starting "binary-mirror-071000" primary control-plane node in "binary-mirror-071000" cluster
	I0919 11:38:36.719124    1694 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 11:38:36.719141    1694 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 11:38:36.719148    1694 cache.go:56] Caching tarball of preloaded images
	I0919 11:38:36.719225    1694 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 11:38:36.719230    1694 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 11:38:36.719454    1694 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/binary-mirror-071000/config.json ...
	I0919 11:38:36.719465    1694 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/binary-mirror-071000/config.json: {Name:mk0ed3106bfbb4757cc7f588cac6a93f99d82de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:38:36.719834    1694 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 11:38:36.719884    1694 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0919 11:38:36.750354    1694 out.go:201] 
	W0919 11:38:36.753198    1694 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780] Decompressors:map[bz2:0x14000696160 gz:0x14000696168 tar:0x14000696110 tar.bz2:0x14000696120 tar.gz:0x14000696130 tar.xz:0x14000696140 tar.zst:0x14000696150 tbz2:0x14000696120 tgz:0x14000696130 txz:0x14000696140 tzst:0x14000696150 xz:0x14000696170 zip:0x140006961a0 zst:0x14000696178] Getters:map[file:0x14000465bb0 http:0x140007d4f00 https:0x140007d4f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780 0x1056e1780] Decompressors:map[bz2:0x14000696160 gz:0x14000696168 tar:0x14000696110 tar.bz2:0x14000696120 tar.gz:0x14000696130 tar.xz:0x14000696140 tar.zst:0x14000696150 tbz2:0x14000696120 tgz:0x14000696130 txz:0x14000696140 tzst:0x14000696150 xz:0x14000696170 zip:0x140006961a0 zst:0x14000696178] Getters:map[file:0x14000465bb0 http:0x140007d4f00 https:0x140007d4f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0919 11:38:36.753206    1694 out.go:270] * 
	* 
	W0919 11:38:36.753671    1694 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 11:38:36.768178    1694 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-071000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-071000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-071000
--- FAIL: TestBinaryMirror (0.26s)
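
Here the mirror itself failed: the "unexpected EOF" indicates the connection to the test's local mirror on 127.0.0.1:49312 was closed mid-response. To exercise --binary-mirror outside the harness, a sketch that serves the directory layout the log shows minikube requesting (<mirror>/<version>/bin/<os>/<arch>/kubectl plus its .sha256 file); the mirror/ directory and the binary-mirror-debug profile name are illustrative, not taken from this run:

	mkdir -p mirror/v1.31.1/bin/darwin/arm64
	curl -fLo mirror/v1.31.1/bin/darwin/arm64/kubectl https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl
	shasum -a 256 mirror/v1.31.1/bin/darwin/arm64/kubectl | awk '{print $1}' > mirror/v1.31.1/bin/darwin/arm64/kubectl.sha256
	(cd mirror && python3 -m http.server 49312) &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-debug --binary-mirror http://127.0.0.1:49312 --driver=qemu2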

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-348000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-348000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.829477333s)

-- stdout --
	* [offline-docker-348000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-348000" primary control-plane node in "offline-docker-348000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-348000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:17:30.080416    4304 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:17:30.080564    4304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:30.080568    4304 out.go:358] Setting ErrFile to fd 2...
	I0919 12:17:30.080570    4304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:30.080696    4304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:17:30.081954    4304 out.go:352] Setting JSON to false
	I0919 12:17:30.099854    4304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2815,"bootTime":1726770635,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:17:30.099952    4304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:17:30.105392    4304 out.go:177] * [offline-docker-348000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:17:30.113331    4304 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:17:30.113371    4304 notify.go:220] Checking for updates...
	I0919 12:17:30.120333    4304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:17:30.123351    4304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:17:30.126211    4304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:17:30.129247    4304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:17:30.132276    4304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:17:30.135566    4304 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:17:30.135636    4304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:17:30.139208    4304 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:17:30.146232    4304 start.go:297] selected driver: qemu2
	I0919 12:17:30.146243    4304 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:17:30.146251    4304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:17:30.148247    4304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:17:30.151248    4304 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:17:30.154320    4304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:17:30.154340    4304 cni.go:84] Creating CNI manager for ""
	I0919 12:17:30.154361    4304 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:17:30.154368    4304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:17:30.154400    4304 start.go:340] cluster config:
	{Name:offline-docker-348000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:17:30.157982    4304 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:17:30.165256    4304 out.go:177] * Starting "offline-docker-348000" primary control-plane node in "offline-docker-348000" cluster
	I0919 12:17:30.169248    4304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:17:30.169276    4304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:17:30.169286    4304 cache.go:56] Caching tarball of preloaded images
	I0919 12:17:30.169369    4304 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:17:30.169374    4304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:17:30.169445    4304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/offline-docker-348000/config.json ...
	I0919 12:17:30.169455    4304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/offline-docker-348000/config.json: {Name:mkfe6d23a6938b3863c7de1a45dddab8cb3b14d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:17:30.169684    4304 start.go:360] acquireMachinesLock for offline-docker-348000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:30.169718    4304 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "offline-docker-348000"
	I0919 12:17:30.169729    4304 start.go:93] Provisioning new machine with config: &{Name:offline-docker-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:30.169767    4304 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:30.178244    4304 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:30.194210    4304 start.go:159] libmachine.API.Create for "offline-docker-348000" (driver="qemu2")
	I0919 12:17:30.194242    4304 client.go:168] LocalClient.Create starting
	I0919 12:17:30.194324    4304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:30.194355    4304 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:30.194366    4304 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:30.194416    4304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:30.194440    4304 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:30.194449    4304 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:30.194835    4304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:30.352290    4304 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:30.453645    4304 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:30.453653    4304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:30.453838    4304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2
	I0919 12:17:30.464771    4304 main.go:141] libmachine: STDOUT: 
	I0919 12:17:30.464812    4304 main.go:141] libmachine: STDERR: 
	I0919 12:17:30.464878    4304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2 +20000M
	I0919 12:17:30.478503    4304 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:30.478520    4304 main.go:141] libmachine: STDERR: 
	I0919 12:17:30.478543    4304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2
	I0919 12:17:30.478549    4304 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:30.478561    4304 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:30.478590    4304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:bf:13:cf:bd:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2
	I0919 12:17:30.480232    4304 main.go:141] libmachine: STDOUT: 
	I0919 12:17:30.480329    4304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:30.480349    4304 client.go:171] duration metric: took 286.109166ms to LocalClient.Create
	I0919 12:17:32.482366    4304 start.go:128] duration metric: took 2.3126555s to createHost
	I0919 12:17:32.482388    4304 start.go:83] releasing machines lock for "offline-docker-348000", held for 2.312728042s
	W0919 12:17:32.482411    4304 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:32.495204    4304 out.go:177] * Deleting "offline-docker-348000" in qemu2 ...
	W0919 12:17:32.508791    4304 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:32.508803    4304 start.go:729] Will try again in 5 seconds ...
	I0919 12:17:37.510854    4304 start.go:360] acquireMachinesLock for offline-docker-348000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:37.511393    4304 start.go:364] duration metric: took 437.375µs to acquireMachinesLock for "offline-docker-348000"
	I0919 12:17:37.511522    4304 start.go:93] Provisioning new machine with config: &{Name:offline-docker-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:37.511837    4304 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:37.524446    4304 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:37.574928    4304 start.go:159] libmachine.API.Create for "offline-docker-348000" (driver="qemu2")
	I0919 12:17:37.574980    4304 client.go:168] LocalClient.Create starting
	I0919 12:17:37.575085    4304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:37.575154    4304 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:37.575171    4304 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:37.575239    4304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:37.575284    4304 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:37.575294    4304 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:37.575856    4304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:37.754618    4304 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:37.814706    4304 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:37.814712    4304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:37.814882    4304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2
	I0919 12:17:37.824167    4304 main.go:141] libmachine: STDOUT: 
	I0919 12:17:37.824188    4304 main.go:141] libmachine: STDERR: 
	I0919 12:17:37.824246    4304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2 +20000M
	I0919 12:17:37.832005    4304 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:37.832020    4304 main.go:141] libmachine: STDERR: 
	I0919 12:17:37.832030    4304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2
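The disk image is built in the two qemu-img steps shown above: a raw scratch file is converted to qcow2, then grown by +20000M so the guest sees a 20000 MB virtual disk while the host file stays sparse. A minimal sketch of the same sequence plus a verification step (paths shortened for illustration; qemu-img info is a standard subcommand):

    # convert the raw scratch image to qcow2, grow it, then inspect it
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2   # virtual size should read ~20 GB; on-disk size stays small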
	I0919 12:17:37.832044    4304 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:37.832052    4304 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:37.832085    4304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:67:6e:ed:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/offline-docker-348000/disk.qcow2
	I0919 12:17:37.833596    4304 main.go:141] libmachine: STDOUT: 
	I0919 12:17:37.833611    4304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:37.833624    4304 client.go:171] duration metric: took 258.646542ms to LocalClient.Create
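The STDERR line above is the root cause of this failure: socket_vmnet_client could not reach the socket_vmnet daemon, so QEMU never received a network file descriptor and the VM was abandoned. A hedged troubleshooting sketch; the daemon launch line follows the socket_vmnet README for an install under /opt/socket_vmnet (which matches the client path in this log), and its exact flags may vary by version:

    # is anything serving the socket?
    ls -l /var/run/socket_vmnet
    # start the daemon by hand if it is not running (vmnet requires root)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet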
	I0919 12:17:39.835748    4304 start.go:128] duration metric: took 2.323926458s to createHost
	I0919 12:17:39.835855    4304 start.go:83] releasing machines lock for "offline-docker-348000", held for 2.324470542s
	W0919 12:17:39.836161    4304 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-348000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:39.849795    4304 out.go:201] 
	W0919 12:17:39.853880    4304 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:17:39.853940    4304 out.go:270] * 
	W0919 12:17:39.856969    4304 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:17:39.866828    4304 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-348000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-19 12:17:39.882811 -0700 PDT m=+2366.404183834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-348000 -n offline-docker-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-348000 -n offline-docker-348000: exit status 7 (69.94125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-348000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-348000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-348000
--- FAIL: TestOffline (9.99s)

TestAddons/parallel/Registry (71.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.38ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0919 11:50:35.818674    1618 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 11:50:35.818682    1618 kapi.go:107] duration metric: took 3.089458ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-scwnh" [8bdf6123-9742-4fb5-a3a6-2eb970734d28] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011814875s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bdzcj" [08fd319b-dcb5-4641-aaf4-0cde96c2f1c6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009557041s
addons_test.go:342: (dbg) Run:  kubectl --context addons-700000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-700000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-700000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.06295725s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-700000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
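The probe above runs wget inside a one-shot busybox pod, so it exercises cluster DNS plus the kube-system registry Service end to end; the one-minute hang means one of those two never answered. A sketch for splitting the two apart (Service name, namespace, context, and image are taken from the log; the pod name dns-probe is illustrative):

    # does the Service exist and does it have endpoints?
    kubectl --context addons-700000 -n kube-system get svc,endpoints registry
    # repeat the probe with the DNS step made explicit
    kubectl --context addons-700000 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "nslookup registry.kube-system.svc.cluster.local && wget --spider -S http://registry.kube-system.svc.cluster.local"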
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 ip
2024/09/19 11:51:46 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable registry --alsologtostderr -v=1
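Note the fallback path: after the in-cluster probe failed, the test fetched the node IP and issued a plain HTTP GET against port 5000, the registry's node port, before disabling the addon. Assuming the addon serves a standard Docker Registry v2 API, the same check from the host would look like:

    # list repositories straight off the node-port endpoint (IP from this run)
    curl -s http://192.168.105.2:5000/v2/_catalog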
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-700000 -n addons-700000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-629000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | -p download-only-629000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| delete  | -p download-only-629000              | download-only-629000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| start   | -o=json --download-only              | download-only-556000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | -p download-only-556000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| delete  | -p download-only-556000              | download-only-556000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| delete  | -p download-only-629000              | download-only-629000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| delete  | -p download-only-556000              | download-only-556000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| start   | --download-only -p                   | binary-mirror-071000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | binary-mirror-071000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-071000              | binary-mirror-071000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| addons  | disable dashboard -p                 | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | addons-700000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | addons-700000                        |                      |         |         |                     |                     |
	| start   | -p addons-700000 --wait=true         | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:41 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-700000 addons disable         | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:42 PDT | 19 Sep 24 11:42 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-700000 addons                 | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:51 PDT | 19 Sep 24 11:51 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-700000 addons                 | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:51 PDT | 19 Sep 24 11:51 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-700000 addons disable         | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:51 PDT | 19 Sep 24 11:51 PDT |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| ip      | addons-700000 ip                     | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:51 PDT | 19 Sep 24 11:51 PDT |
	| addons  | addons-700000 addons disable         | addons-700000        | jenkins | v1.34.0 | 19 Sep 24 11:51 PDT | 19 Sep 24 11:51 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 11:38:36
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 11:38:36.936172    1708 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:38:36.936308    1708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:36.936312    1708 out.go:358] Setting ErrFile to fd 2...
	I0919 11:38:36.936315    1708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:36.936453    1708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 11:38:36.937494    1708 out.go:352] Setting JSON to false
	I0919 11:38:36.953493    1708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":481,"bootTime":1726770635,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:38:36.953558    1708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:38:36.958189    1708 out.go:177] * [addons-700000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:38:36.964975    1708 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 11:38:36.965033    1708 notify.go:220] Checking for updates...
	I0919 11:38:36.972163    1708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:38:36.973500    1708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:38:36.976145    1708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:38:36.979138    1708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 11:38:36.982226    1708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 11:38:36.985381    1708 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:38:36.990093    1708 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 11:38:36.997215    1708 start.go:297] selected driver: qemu2
	I0919 11:38:36.997224    1708 start.go:901] validating driver "qemu2" against <nil>
	I0919 11:38:36.997233    1708 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 11:38:36.999413    1708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 11:38:37.002177    1708 out.go:177] * Automatically selected the socket_vmnet network
	I0919 11:38:37.005273    1708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 11:38:37.005295    1708 cni.go:84] Creating CNI manager for ""
	I0919 11:38:37.005317    1708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 11:38:37.005321    1708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 11:38:37.005361    1708 start.go:340] cluster config:
	{Name:addons-700000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:38:37.008951    1708 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 11:38:37.017141    1708 out.go:177] * Starting "addons-700000" primary control-plane node in "addons-700000" cluster
	I0919 11:38:37.020159    1708 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 11:38:37.020181    1708 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 11:38:37.020189    1708 cache.go:56] Caching tarball of preloaded images
	I0919 11:38:37.020260    1708 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 11:38:37.020266    1708 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 11:38:37.020480    1708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/config.json ...
	I0919 11:38:37.020492    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/config.json: {Name:mkcadc426698287dfb27e41b3be6b05220276244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:38:37.020870    1708 start.go:360] acquireMachinesLock for addons-700000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 11:38:37.020935    1708 start.go:364] duration metric: took 58.583µs to acquireMachinesLock for "addons-700000"
	I0919 11:38:37.020945    1708 start.go:93] Provisioning new machine with config: &{Name:addons-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 11:38:37.020971    1708 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 11:38:37.028038    1708 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0919 11:38:37.254665    1708 start.go:159] libmachine.API.Create for "addons-700000" (driver="qemu2")
	I0919 11:38:37.254709    1708 client.go:168] LocalClient.Create starting
	I0919 11:38:37.254874    1708 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 11:38:37.293745    1708 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 11:38:37.612012    1708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 11:38:38.382977    1708 main.go:141] libmachine: Creating SSH key...
	I0919 11:38:38.552758    1708 main.go:141] libmachine: Creating Disk image...
	I0919 11:38:38.552764    1708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 11:38:38.553038    1708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/disk.qcow2
	I0919 11:38:38.572549    1708 main.go:141] libmachine: STDOUT: 
	I0919 11:38:38.572582    1708 main.go:141] libmachine: STDERR: 
	I0919 11:38:38.572659    1708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/disk.qcow2 +20000M
	I0919 11:38:38.580771    1708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 11:38:38.580785    1708 main.go:141] libmachine: STDERR: 
	I0919 11:38:38.580804    1708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/disk.qcow2
	I0919 11:38:38.580809    1708 main.go:141] libmachine: Starting QEMU VM...
	I0919 11:38:38.580845    1708 qemu.go:418] Using hvf for hardware acceleration
	I0919 11:38:38.580874    1708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:6a:da:3a:b7:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/disk.qcow2
	I0919 11:38:38.638769    1708 main.go:141] libmachine: STDOUT: 
	I0919 11:38:38.638795    1708 main.go:141] libmachine: STDERR: 
	I0919 11:38:38.638799    1708 main.go:141] libmachine: Attempt 0
	I0919 11:38:38.638812    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:38.638871    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:38.638891    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:40.641086    1708 main.go:141] libmachine: Attempt 1
	I0919 11:38:40.641170    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:40.641581    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:40.641633    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:42.644025    1708 main.go:141] libmachine: Attempt 2
	I0919 11:38:42.644223    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:42.644535    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:42.644598    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:44.646745    1708 main.go:141] libmachine: Attempt 3
	I0919 11:38:44.646775    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:44.646877    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:44.646901    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:46.648901    1708 main.go:141] libmachine: Attempt 4
	I0919 11:38:46.648912    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:46.648946    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:46.648952    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:48.650956    1708 main.go:141] libmachine: Attempt 5
	I0919 11:38:48.650978    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:48.651007    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:48.651014    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:50.653051    1708 main.go:141] libmachine: Attempt 6
	I0919 11:38:50.653069    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:50.653140    1708 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 11:38:50.653150    1708 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66edc109}
	I0919 11:38:52.655186    1708 main.go:141] libmachine: Attempt 7
	I0919 11:38:52.655208    1708 main.go:141] libmachine: Searching for ce:6a:da:3a:b7:89 in /var/db/dhcpd_leases ...
	I0919 11:38:52.655346    1708 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0919 11:38:52.655359    1708 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:6a:da:3a:b7:89 ID:1,ce:6a:da:3a:b7:89 Lease:0x66edc13b}
	I0919 11:38:52.655362    1708 main.go:141] libmachine: Found match: ce:6a:da:3a:b7:89
	I0919 11:38:52.655374    1708 main.go:141] libmachine: IP: 192.168.105.2
	I0919 11:38:52.655378    1708 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
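The attempt loop above is how the qemu2 driver learns the guest's address: it polls the macOS DHCP lease database until an entry carrying the VM's MAC shows up, then waits for SSH on that IP. The same lookup can be done by hand (the lease file path comes from the log; field names in the raw file differ slightly from the parsed form printed here):

    # match this run's MAC in the vmnet lease database
    grep -i -B3 'ce:6a:da:3a:b7:89' /var/db/dhcpd_leases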
	I0919 11:38:53.673421    1708 machine.go:93] provisionDockerMachine start ...
	I0919 11:38:53.674917    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:53.675355    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:53.675370    1708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 11:38:53.753164    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0919 11:38:53.753201    1708 buildroot.go:166] provisioning hostname "addons-700000"
	I0919 11:38:53.753358    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:53.753615    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:53.753627    1708 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-700000 && echo "addons-700000" | sudo tee /etc/hostname
	I0919 11:38:53.826329    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-700000
	
	I0919 11:38:53.826448    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:53.826641    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:53.826653    1708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-700000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-700000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-700000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 11:38:53.889199    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 11:38:53.889215    1708 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19664-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19664-1099/.minikube}
	I0919 11:38:53.889231    1708 buildroot.go:174] setting up certificates
	I0919 11:38:53.889237    1708 provision.go:84] configureAuth start
	I0919 11:38:53.889246    1708 provision.go:143] copyHostCerts
	I0919 11:38:53.889369    1708 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem (1078 bytes)
	I0919 11:38:53.889657    1708 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem (1123 bytes)
	I0919 11:38:53.889819    1708 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem (1679 bytes)
	I0919 11:38:53.889928    1708 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem org=jenkins.addons-700000 san=[127.0.0.1 192.168.105.2 addons-700000 localhost minikube]
	I0919 11:38:54.085529    1708 provision.go:177] copyRemoteCerts
	I0919 11:38:54.085596    1708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 11:38:54.085605    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:38:54.116518    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 11:38:54.125183    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 11:38:54.133363    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 11:38:54.141686    1708 provision.go:87] duration metric: took 252.435417ms to configureAuth
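configureAuth just minted a server certificate whose SANs cover every name the daemon may be dialed by (127.0.0.1, the lease IP, the hostname) and copied CA, cert, and key into /etc/docker on the guest. A sketch for inspecting what landed there, assuming openssl is available in the Buildroot guest (key path and IP are this run's; openssl's -ext option needs 1.1.1+):

    KEY=/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa
    ssh -i "$KEY" docker@192.168.105.2 'sudo ls -l /etc/docker/*.pem'
    # confirm the SANs baked into the server certificate
    ssh -i "$KEY" docker@192.168.105.2 'sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName'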
	I0919 11:38:54.141696    1708 buildroot.go:189] setting minikube options for container-runtime
	I0919 11:38:54.141799    1708 config.go:182] Loaded profile config "addons-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 11:38:54.141843    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:54.141928    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:54.141934    1708 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 11:38:54.197381    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 11:38:54.197389    1708 buildroot.go:70] root file system type: tmpfs
	I0919 11:38:54.197440    1708 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 11:38:54.197491    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:54.197588    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:54.197621    1708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 11:38:54.255879    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 11:38:54.255930    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:54.256031    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:54.256040    1708 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 11:38:55.638500    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 11:38:55.638512    1708 machine.go:96] duration metric: took 1.965112042s to provisionDockerMachine
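The unit install just above uses a compare-then-swap idiom: diff the freshly rendered docker.service.new against the installed unit and only move it into place (followed by daemon-reload, enable, restart) when they differ, so re-provisioning an unchanged machine never bounces Docker. The "can't stat" diff output is expected on first boot, when no unit exists yet. Standard systemctl checks over the same SSH session would confirm the result:

    systemctl cat docker          # prints the unit that was just installed
    systemctl is-enabled docker   # "enabled", per the symlink created above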
	I0919 11:38:55.638518    1708 client.go:171] duration metric: took 18.3842465s to LocalClient.Create
	I0919 11:38:55.638542    1708 start.go:167] duration metric: took 18.38432075s to libmachine.API.Create "addons-700000"
	I0919 11:38:55.638549    1708 start.go:293] postStartSetup for "addons-700000" (driver="qemu2")
	I0919 11:38:55.638556    1708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 11:38:55.638627    1708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 11:38:55.638637    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:38:55.670252    1708 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 11:38:55.671918    1708 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 11:38:55.671927    1708 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/addons for local assets ...
	I0919 11:38:55.672024    1708 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/files for local assets ...
	I0919 11:38:55.672055    1708 start.go:296] duration metric: took 33.503792ms for postStartSetup
	I0919 11:38:55.672470    1708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/config.json ...
	I0919 11:38:55.672661    1708 start.go:128] duration metric: took 18.652134542s to createHost
	I0919 11:38:55.672694    1708 main.go:141] libmachine: Using SSH client type: native
	I0919 11:38:55.672782    1708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029cd190] 0x1029cf9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 11:38:55.672786    1708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 11:38:55.727399    1708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726771136.159280752
	
	I0919 11:38:55.727408    1708 fix.go:216] guest clock: 1726771136.159280752
	I0919 11:38:55.727412    1708 fix.go:229] Guest: 2024-09-19 11:38:56.159280752 -0700 PDT Remote: 2024-09-19 11:38:55.672664 -0700 PDT m=+18.756175543 (delta=486.616752ms)
	I0919 11:38:55.727425    1708 fix.go:200] guest clock delta is within tolerance: 486.616752ms
	I0919 11:38:55.727428    1708 start.go:83] releasing machines lock for "addons-700000", held for 18.706937s
	I0919 11:38:55.727720    1708 ssh_runner.go:195] Run: cat /version.json
	I0919 11:38:55.727723    1708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 11:38:55.727729    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:38:55.727740    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:38:55.759141    1708 ssh_runner.go:195] Run: systemctl --version
	I0919 11:38:55.801593    1708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 11:38:55.803731    1708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 11:38:55.803771    1708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 11:38:55.809985    1708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 11:38:55.809993    1708 start.go:495] detecting cgroup driver to use...
	I0919 11:38:55.810112    1708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 11:38:55.816832    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0919 11:38:55.820867    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 11:38:55.824649    1708 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 11:38:55.824681    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 11:38:55.828624    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 11:38:55.832556    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 11:38:55.836483    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 11:38:55.840473    1708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 11:38:55.844361    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 11:38:55.848292    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 11:38:55.852168    1708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 11:38:55.856279    1708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 11:38:55.859678    1708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 11:38:55.863058    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:38:55.953687    1708 ssh_runner.go:195] Run: sudo systemctl restart containerd
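The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image to registry.k8s.io/pause:3.10, set SystemdCgroup = false so containerd uses the cgroupfs driver, migrate v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A one-liner to confirm the edits took (same file, keys from the commands above):

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml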
	I0919 11:38:55.964359    1708 start.go:495] detecting cgroup driver to use...
	I0919 11:38:55.964431    1708 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 11:38:55.970655    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 11:38:55.976245    1708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 11:38:55.984012    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 11:38:55.989479    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 11:38:55.994524    1708 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 11:38:56.032508    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 11:38:56.038413    1708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 11:38:56.044871    1708 ssh_runner.go:195] Run: which cri-dockerd
	I0919 11:38:56.046234    1708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 11:38:56.049398    1708 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0919 11:38:56.055228    1708 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 11:38:56.142809    1708 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 11:38:56.208138    1708 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 11:38:56.208192    1708 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0919 11:38:56.214292    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:38:56.297292    1708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 11:38:58.482620    1708 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.185362583s)
	I0919 11:38:58.482689    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 11:38:58.488234    1708 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 11:38:58.494946    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 11:38:58.500049    1708 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 11:38:58.569414    1708 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 11:38:58.636725    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:38:58.704240    1708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 11:38:58.711377    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 11:38:58.716631    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:38:58.789174    1708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 11:38:58.814934    1708 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 11:38:58.815047    1708 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 11:38:58.817330    1708 start.go:563] Will wait 60s for crictl version
	I0919 11:38:58.817376    1708 ssh_runner.go:195] Run: which crictl
	I0919 11:38:58.819879    1708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 11:38:58.839329    1708 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0919 11:38:58.839405    1708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 11:38:58.855133    1708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 11:38:58.871333    1708 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0919 11:38:58.871417    1708 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0919 11:38:58.872926    1708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
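This one-liner is the standard pattern for editing a root-owned file over ssh: grep strips any stale host.minikube.internal entry, echo appends the fresh one, both writing to a PID-suffixed temp file as the unprivileged user, and only the final cp runs under sudo. A plain `sudo cmd > /etc/hosts` would not work, because the redirection is performed by the calling shell before sudo elevates. The same idiom, spelled out (illustrative):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.105.1\thost.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts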
	I0919 11:38:58.877267    1708 kubeadm.go:883] updating cluster {Name:addons-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 11:38:58.877313    1708 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 11:38:58.877368    1708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 11:38:58.882680    1708 docker.go:685] Got preloaded images: 
	I0919 11:38:58.882690    1708 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0919 11:38:58.882735    1708 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 11:38:58.886141    1708 ssh_runner.go:195] Run: which lz4
	I0919 11:38:58.887596    1708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 11:38:58.888913    1708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 11:38:58.888923    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0919 11:39:00.136948    1708 docker.go:649] duration metric: took 1.249423208s to copy over tarball
	I0919 11:39:00.137016    1708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 11:39:01.091954    1708 ssh_runner.go:146] rm: /preloaded.tar.lz4
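The extraction command above leans on two tar features: `-I lz4` pipes the archive through the external lz4 binary instead of gzip, and `--xattrs --xattrs-include security.capability` preserves file-capability xattrs on the unpacked binaries and image layers, which a plain extract would drop. The equivalent standalone invocation (same flags as the logged run):

    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4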
	I0919 11:39:01.107061    1708 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 11:39:01.110542    1708 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0919 11:39:01.116448    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:39:01.202786    1708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 11:39:03.903645    1708 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.700906208s)
	I0919 11:39:03.903781    1708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 11:39:03.911453    1708 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 11:39:03.911463    1708 cache_images.go:84] Images are preloaded, skipping loading
	I0919 11:39:03.911474    1708 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0919 11:39:03.911543    1708 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-700000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
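The kubelet unit rendered above uses the systemd drop-in convention of an empty `ExecStart=` line to clear the ExecStart inherited from the base kubelet.service before supplying minikube's own command line; without that reset, systemd rejects a second ExecStart on a regular service. A minimal drop-in of the same shape (path taken from the scp lines below; flags abridged for illustration):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml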
	I0919 11:39:03.911621    1708 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 11:39:03.933606    1708 cni.go:84] Creating CNI manager for ""
	I0919 11:39:03.933619    1708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 11:39:03.933640    1708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 11:39:03.933651    1708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-700000 NodeName:addons-700000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 11:39:03.933713    1708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-700000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 11:39:03.933779    1708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 11:39:03.937458    1708 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 11:39:03.937496    1708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 11:39:03.940918    1708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 11:39:03.946886    1708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 11:39:03.952588    1708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
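The 2158-byte kubeadm.yaml.new written here is the four-document stream dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Recent kubeadm releases can sanity-check such a file offline before init; a sketch, assuming the `kubeadm config validate` subcommand is available in this kubeadm generation:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new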
	I0919 11:39:03.958740    1708 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0919 11:39:03.960262    1708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 11:39:03.964556    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:39:04.054620    1708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 11:39:04.062444    1708 certs.go:68] Setting up /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000 for IP: 192.168.105.2
	I0919 11:39:04.062454    1708 certs.go:194] generating shared ca certs ...
	I0919 11:39:04.062462    1708 certs.go:226] acquiring lock for ca certs: {Name:mk207a98b59455406f5fa19947ac5c81f6753b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.062676    1708 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key
	I0919 11:39:04.235118    1708 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt ...
	I0919 11:39:04.235130    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt: {Name:mk913ee109cbc0802ba2a1df6e57d9b1ab60a4c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.235495    1708 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key ...
	I0919 11:39:04.235499    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key: {Name:mk1405976ae8173a1007ef592ad5488f09208a41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.235634    1708 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key
	I0919 11:39:04.359201    1708 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.crt ...
	I0919 11:39:04.359210    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.crt: {Name:mke9502ff077e032358a44cc93c79e020a23dc70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.359468    1708 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key ...
	I0919 11:39:04.359472    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key: {Name:mk44012f40dcfe03068d2184cc8767135b07822e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.359606    1708 certs.go:256] generating profile certs ...
	I0919 11:39:04.359644    1708 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.key
	I0919 11:39:04.359651    1708 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt with IP's: []
	I0919 11:39:04.439123    1708 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt ...
	I0919 11:39:04.439127    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: {Name:mkf55c0ac1d2f51e59896760b2a60d360ccd7317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.439271    1708 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.key ...
	I0919 11:39:04.439274    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.key: {Name:mkeab5e7bf329ffc1c58261202278dd52d861049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.439382    1708 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.key.1ebc3fb4
	I0919 11:39:04.439391    1708 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.crt.1ebc3fb4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0919 11:39:04.597542    1708 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.crt.1ebc3fb4 ...
	I0919 11:39:04.597547    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.crt.1ebc3fb4: {Name:mkb82bd085d18bfbd16f2ce5729494cc305a6065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.597707    1708 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.key.1ebc3fb4 ...
	I0919 11:39:04.597711    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.key.1ebc3fb4: {Name:mkdf3a2905151316b09d0d543d448b555d843729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.597831    1708 certs.go:381] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.crt.1ebc3fb4 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.crt
	I0919 11:39:04.598118    1708 certs.go:385] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.key.1ebc3fb4 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.key
	I0919 11:39:04.598228    1708 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.key
	I0919 11:39:04.598242    1708 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.crt with IP's: []
	I0919 11:39:04.837384    1708 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.crt ...
	I0919 11:39:04.837398    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.crt: {Name:mk2620c0fa4ad1a60dfee29691cf6974ba519f1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.837705    1708 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.key ...
	I0919 11:39:04.837711    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.key: {Name:mk587b0de2f41e44a775b2a29ff912d5f2ed412a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:04.837968    1708 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 11:39:04.837998    1708 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem (1078 bytes)
	I0919 11:39:04.838018    1708 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem (1123 bytes)
	I0919 11:39:04.838036    1708 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem (1679 bytes)
	I0919 11:39:04.838483    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 11:39:04.847414    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 11:39:04.855860    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 11:39:04.864366    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 11:39:04.872864    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 11:39:04.881227    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 11:39:04.889420    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 11:39:04.897751    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 11:39:04.906211    1708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 11:39:04.914373    1708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 11:39:04.921169    1708 ssh_runner.go:195] Run: openssl version
	I0919 11:39:04.923580    1708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 11:39:04.927395    1708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 11:39:04.928970    1708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 11:39:04.928992    1708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 11:39:04.931191    1708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
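The b5213941.0 link name is not arbitrary: it is OpenSSL's 8-hex-digit subject-name hash of the minikubeCA certificate, which is how OpenSSL-linked clients locate a CA in /etc/ssl/certs without scanning every file. The hash comes straight from the `openssl x509 -hash` call two lines up:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0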
	I0919 11:39:04.934732    1708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 11:39:04.936157    1708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 11:39:04.936197    1708 kubeadm.go:392] StartCluster: {Name:addons-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:39:04.936269    1708 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 11:39:04.941480    1708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 11:39:04.945143    1708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 11:39:04.948969    1708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 11:39:04.952747    1708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 11:39:04.952753    1708 kubeadm.go:157] found existing configuration files:
	
	I0919 11:39:04.952780    1708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 11:39:04.956261    1708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 11:39:04.956292    1708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 11:39:04.960050    1708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 11:39:04.963516    1708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 11:39:04.963546    1708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 11:39:04.966899    1708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 11:39:04.969922    1708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 11:39:04.969950    1708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 11:39:04.973284    1708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 11:39:04.976648    1708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 11:39:04.976678    1708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 11:39:04.980516    1708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 11:39:05.002703    1708 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 11:39:05.002748    1708 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 11:39:05.039658    1708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 11:39:05.039716    1708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 11:39:05.039771    1708 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 11:39:05.044151    1708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 11:39:05.051406    1708 out.go:235]   - Generating certificates and keys ...
	I0919 11:39:05.051449    1708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 11:39:05.051496    1708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 11:39:05.107469    1708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 11:39:05.209484    1708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 11:39:05.293689    1708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 11:39:05.376957    1708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 11:39:05.518460    1708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 11:39:05.518533    1708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-700000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0919 11:39:05.574421    1708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 11:39:05.574483    1708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-700000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0919 11:39:05.628559    1708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 11:39:05.687628    1708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 11:39:05.742357    1708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 11:39:05.742393    1708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 11:39:05.882523    1708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 11:39:05.940055    1708 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 11:39:06.099074    1708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 11:39:06.150289    1708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 11:39:06.213361    1708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 11:39:06.213671    1708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 11:39:06.215046    1708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 11:39:06.219326    1708 out.go:235]   - Booting up control plane ...
	I0919 11:39:06.219381    1708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 11:39:06.219419    1708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 11:39:06.219473    1708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 11:39:06.225976    1708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 11:39:06.228723    1708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 11:39:06.228748    1708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 11:39:06.332521    1708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 11:39:06.332590    1708 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 11:39:06.838354    1708 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.963833ms
	I0919 11:39:06.838596    1708 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 11:39:09.846407    1708 kubeadm.go:310] [api-check] The API server is healthy after 3.008766835s
	I0919 11:39:09.858670    1708 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 11:39:09.865952    1708 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 11:39:09.876267    1708 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 11:39:09.876446    1708 kubeadm.go:310] [mark-control-plane] Marking the node addons-700000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 11:39:09.881175    1708 kubeadm.go:310] [bootstrap-token] Using token: wdkxjo.86tkkqxa92mzmshs
	I0919 11:39:09.887777    1708 out.go:235]   - Configuring RBAC rules ...
	I0919 11:39:09.887849    1708 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 11:39:09.893091    1708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 11:39:09.897498    1708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 11:39:09.898656    1708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 11:39:09.899714    1708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 11:39:09.901007    1708 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 11:39:10.260214    1708 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 11:39:10.657496    1708 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 11:39:11.253039    1708 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 11:39:11.254230    1708 kubeadm.go:310] 
	I0919 11:39:11.254322    1708 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 11:39:11.254331    1708 kubeadm.go:310] 
	I0919 11:39:11.254461    1708 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 11:39:11.254486    1708 kubeadm.go:310] 
	I0919 11:39:11.254568    1708 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 11:39:11.254659    1708 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 11:39:11.254784    1708 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 11:39:11.254800    1708 kubeadm.go:310] 
	I0919 11:39:11.254891    1708 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 11:39:11.254924    1708 kubeadm.go:310] 
	I0919 11:39:11.255013    1708 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 11:39:11.255024    1708 kubeadm.go:310] 
	I0919 11:39:11.255108    1708 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 11:39:11.255267    1708 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 11:39:11.255384    1708 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 11:39:11.255391    1708 kubeadm.go:310] 
	I0919 11:39:11.255531    1708 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 11:39:11.255658    1708 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 11:39:11.255668    1708 kubeadm.go:310] 
	I0919 11:39:11.255821    1708 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wdkxjo.86tkkqxa92mzmshs \
	I0919 11:39:11.256000    1708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 \
	I0919 11:39:11.256035    1708 kubeadm.go:310] 	--control-plane 
	I0919 11:39:11.256048    1708 kubeadm.go:310] 
	I0919 11:39:11.256187    1708 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 11:39:11.256197    1708 kubeadm.go:310] 
	I0919 11:39:11.256338    1708 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wdkxjo.86tkkqxa92mzmshs \
	I0919 11:39:11.256505    1708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 
	I0919 11:39:11.257015    1708 kubeadm.go:310] W0919 18:39:05.433099    1592 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 11:39:11.257475    1708 kubeadm.go:310] W0919 18:39:05.433698    1592 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 11:39:11.257681    1708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
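Both warnings are benign for this run: kubeadm v1.31 still accepts the deprecated kubeadm.k8s.io/v1beta3 spec, and minikube manages kubelet startup itself rather than through `systemctl enable`. Migrating the config to the newer API, exactly as the warning suggests, would be:

    kubeadm config migrate --old-config old.yaml --new-config new.yaml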
	I0919 11:39:11.257710    1708 cni.go:84] Creating CNI manager for ""
	I0919 11:39:11.257734    1708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 11:39:11.261118    1708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 11:39:11.268075    1708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 11:39:11.279304    1708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
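The 496-byte conflist installed here wires the in-tree bridge CNI plugin to the 10.244.0.0/16 pod CIDR selected earlier. Its exact contents are not logged; a representative bridge conflist of roughly this shape (an assumption, not the actual payload) could be written as:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } }
      ]
    }
    EOF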
	I0919 11:39:11.291257    1708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 11:39:11.291395    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:11.291507    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-700000 minikube.k8s.io/updated_at=2024_09_19T11_39_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-700000 minikube.k8s.io/primary=true
	I0919 11:39:11.312051    1708 ops.go:34] apiserver oom_adj: -16
	I0919 11:39:11.360572    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:11.862721    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:12.362638    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:12.862660    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:13.362674    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:13.862568    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:14.362623    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:14.860903    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:15.362597    1708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 11:39:15.398490    1708 kubeadm.go:1113] duration metric: took 4.107318334s to wait for elevateKubeSystemPrivileges
	I0919 11:39:15.398508    1708 kubeadm.go:394] duration metric: took 10.462563208s to StartCluster
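The elevateKubeSystemPrivileges step is the minikube-rbac clusterrolebinding applied above plus the 500ms polling loop on `kubectl get sa default`: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has come up, and addon manifests cannot be applied before it does. The equivalent wait by hand (illustrative):

    until kubectl get sa default -n default >/dev/null 2>&1; do sleep 0.5; done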
	I0919 11:39:15.398517    1708 settings.go:142] acquiring lock: {Name:mk40c96dc3647741b89517369d27068bccfc0e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:15.398693    1708 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:39:15.398875    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:39:15.399119    1708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 11:39:15.399128    1708 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 11:39:15.399154    1708 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 11:39:15.399207    1708 addons.go:69] Setting yakd=true in profile "addons-700000"
	I0919 11:39:15.399210    1708 addons.go:69] Setting ingress=true in profile "addons-700000"
	I0919 11:39:15.399216    1708 addons.go:234] Setting addon yakd=true in "addons-700000"
	I0919 11:39:15.399218    1708 config.go:182] Loaded profile config "addons-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 11:39:15.399220    1708 addons.go:234] Setting addon ingress=true in "addons-700000"
	I0919 11:39:15.399227    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399237    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399243    1708 addons.go:69] Setting cloud-spanner=true in profile "addons-700000"
	I0919 11:39:15.399248    1708 addons.go:234] Setting addon cloud-spanner=true in "addons-700000"
	I0919 11:39:15.399252    1708 addons.go:69] Setting registry=true in profile "addons-700000"
	I0919 11:39:15.399258    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399262    1708 addons.go:234] Setting addon registry=true in "addons-700000"
	I0919 11:39:15.399259    1708 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-700000"
	I0919 11:39:15.399282    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399259    1708 addons.go:69] Setting default-storageclass=true in profile "addons-700000"
	I0919 11:39:15.399301    1708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-700000"
	I0919 11:39:15.399367    1708 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-700000"
	I0919 11:39:15.399358    1708 addons.go:69] Setting inspektor-gadget=true in profile "addons-700000"
	I0919 11:39:15.399396    1708 addons.go:234] Setting addon inspektor-gadget=true in "addons-700000"
	I0919 11:39:15.399403    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399422    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399521    1708 addons.go:69] Setting gcp-auth=true in profile "addons-700000"
	I0919 11:39:15.399529    1708 mustload.go:65] Loading cluster: addons-700000
	I0919 11:39:15.399530    1708 retry.go:31] will retry after 1.478006723s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399536    1708 retry.go:31] will retry after 867.160232ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399539    1708 addons.go:69] Setting storage-provisioner=true in profile "addons-700000"
	I0919 11:39:15.399546    1708 addons.go:69] Setting volcano=true in profile "addons-700000"
	I0919 11:39:15.399551    1708 addons.go:69] Setting metrics-server=true in profile "addons-700000"
	I0919 11:39:15.399552    1708 addons.go:234] Setting addon volcano=true in "addons-700000"
	I0919 11:39:15.399555    1708 addons.go:234] Setting addon metrics-server=true in "addons-700000"
	I0919 11:39:15.399562    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399574    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399612    1708 config.go:182] Loaded profile config "addons-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 11:39:15.399762    1708 retry.go:31] will retry after 868.434985ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399769    1708 addons.go:69] Setting ingress-dns=true in profile "addons-700000"
	I0919 11:39:15.399772    1708 addons.go:234] Setting addon ingress-dns=true in "addons-700000"
	I0919 11:39:15.399780    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399776    1708 retry.go:31] will retry after 1.408326093s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399782    1708 retry.go:31] will retry after 716.351774ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399800    1708 addons.go:69] Setting volumesnapshots=true in profile "addons-700000"
	I0919 11:39:15.399803    1708 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-700000"
	I0919 11:39:15.399810    1708 addons.go:234] Setting addon volumesnapshots=true in "addons-700000"
	I0919 11:39:15.399822    1708 retry.go:31] will retry after 1.318577078s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399542    1708 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-700000"
	I0919 11:39:15.399819    1708 retry.go:31] will retry after 1.373867807s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399836    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399547    1708 addons.go:234] Setting addon storage-provisioner=true in "addons-700000"
	I0919 11:39:15.399863    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.399830    1708 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-700000"
	I0919 11:39:15.399946    1708 retry.go:31] will retry after 1.129986645s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.399843    1708 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-700000"
	I0919 11:39:15.400010    1708 retry.go:31] will retry after 735.000025ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.400030    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:15.400136    1708 retry.go:31] will retry after 1.050328688s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.400164    1708 retry.go:31] will retry after 1.409526773s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.400201    1708 retry.go:31] will retry after 1.112989305s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.400341    1708 retry.go:31] will retry after 1.150073275s: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:15.403458    1708 out.go:177] * Verifying Kubernetes components...
	I0919 11:39:15.411363    1708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 11:39:15.415451    1708 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 11:39:15.415505    1708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 11:39:15.423426    1708 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 11:39:15.423436    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 11:39:15.423446    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:15.427383    1708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 11:39:15.431308    1708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 11:39:15.435525    1708 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 11:39:15.435532    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 11:39:15.435540    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:15.444536    1708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
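The sed pipeline above rewrites the live CoreDNS Corefile through `kubectl replace`, inserting a hosts block ahead of the forward plugin so host.minikube.internal resolves from inside pods, and enabling `log` after `errors`. The resulting Corefile stanza, as constructed by the sed expressions themselves:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }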
	I0919 11:39:15.530530    1708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 11:39:15.546570    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 11:39:15.639292    1708 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0919 11:39:15.640711    1708 node_ready.go:35] waiting up to 6m0s for node "addons-700000" to be "Ready" ...
	I0919 11:39:15.641887    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 11:39:15.658843    1708 node_ready.go:49] node "addons-700000" has status "Ready":"True"
	I0919 11:39:15.658861    1708 node_ready.go:38] duration metric: took 18.129834ms for node "addons-700000" to be "Ready" ...
	I0919 11:39:15.658865    1708 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 11:39:15.671199    1708 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-700000" in "kube-system" namespace to be "Ready" ...
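The readiness gate is two-staged: first the Node Ready condition (satisfied about 18ms later, since kubeadm has already marked the node Ready), then each system-critical pod by its label. The same checks done by hand with kubectl (illustrative):

    kubectl wait --for=condition=Ready node/addons-700000 --timeout=6m
    kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=6m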
	I0919 11:39:16.042459    1708 addons.go:475] Verifying addon ingress=true in "addons-700000"
	I0919 11:39:16.046830    1708 out.go:177] * Verifying ingress addon...
	I0919 11:39:16.055112    1708 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 11:39:16.056317    1708 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 11:39:16.122775    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 11:39:16.132791    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 11:39:16.139769    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 11:39:16.146743    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 11:39:16.150586    1708 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-700000"
	I0919 11:39:16.150604    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:16.150981    1708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-700000" context rescaled to 1 replicas
	I0919 11:39:16.153770    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 11:39:16.156786    1708 out.go:177]   - Using image docker.io/busybox:stable
	I0919 11:39:16.163800    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 11:39:16.166810    1708 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 11:39:16.170808    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 11:39:16.173815    1708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 11:39:16.173822    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 11:39:16.173830    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.179793    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 11:39:16.182668    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 11:39:16.182675    1708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 11:39:16.182683    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.209134    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 11:39:16.225527    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 11:39:16.225542    1708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 11:39:16.258605    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 11:39:16.258616    1708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 11:39:16.268685    1708 retry.go:31] will retry after 777.07214ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor: connect: connection refused
	I0919 11:39:16.269096    1708 host.go:66] Checking if "addons-700000" exists ...
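
The connection-refused retry above is against the VM itself, not the cluster: the qemu2 driver checks machine state through QEMU's unix-domain monitor socket, which sits next to the machine files and can briefly refuse connections while QEMU is busy. A tiny probe of that socket (a sketch; it assumes the socket path from the log is present):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        sock := "/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/monitor"
        for {
            conn, err := net.DialTimeout("unix", sock, time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("monitor socket is accepting connections")
                return
            }
            time.Sleep(500 * time.Millisecond) // back off and retry, as retry.go does above
        }
    }
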
	I0919 11:39:16.293360    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 11:39:16.293380    1708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 11:39:16.306498    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 11:39:16.306513    1708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 11:39:16.340452    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 11:39:16.340464    1708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 11:39:16.356718    1708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 11:39:16.356734    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 11:39:16.380832    1708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 11:39:16.380847    1708 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 11:39:16.406837    1708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 11:39:16.406847    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 11:39:16.417573    1708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 11:39:16.417583    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 11:39:16.432598    1708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 11:39:16.432608    1708 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 11:39:16.447826    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
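
All eleven CSI manifests go to the API server in a single kubectl invocation, so the RBAC objects land together with the workloads that need them. From Go that is one exec call (a sketch; the kubectl and kubeconfig paths are the ones in the log, and the file list is abbreviated):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        args := []string{"apply",
            "-f", "/etc/kubernetes/addons/rbac-external-attacher.yaml",
            "-f", "/etc/kubernetes/addons/csi-hostpath-plugin.yaml",
            // ...the remaining -f pairs from the log line above...
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
        // kubectl needs the in-guest admin kubeconfig to reach the apiserver.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err) // a non-zero exit surfaces here, as in the snapshot retry below
        }
    }
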
	I0919 11:39:16.455185    1708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 11:39:16.459241    1708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 11:39:16.459250    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 11:39:16.459260    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.521133    1708 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 11:39:16.525233    1708 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 11:39:16.525247    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 11:39:16.525258    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.533150    1708 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 11:39:16.536157    1708 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 11:39:16.536170    1708 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 11:39:16.536183    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.567992    1708 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 11:39:16.571223    1708 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 11:39:16.571231    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 11:39:16.571240    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.573595    1708 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 11:39:16.573602    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:16.594477    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 11:39:16.615752    1708 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 11:39:16.615769    1708 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 11:39:16.634627    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 11:39:16.676823    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 11:39:16.736046    1708 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 11:39:16.736062    1708 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 11:39:16.782247    1708 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 11:39:16.783603    1708 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0919 11:39:16.787120    1708 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 11:39:16.787136    1708 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 11:39:16.787148    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.795164    1708 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0919 11:39:16.799175    1708 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0919 11:39:16.799932    1708 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 11:39:16.799939    1708 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 11:39:16.805588    1708 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 11:39:16.805599    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0919 11:39:16.805614    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.809419    1708 addons.go:234] Setting addon default-storageclass=true in "addons-700000"
	I0919 11:39:16.809438    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:16.810091    1708 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 11:39:16.810097    1708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 11:39:16.810104    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.816172    1708 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 11:39:16.820150    1708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 11:39:16.820162    1708 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 11:39:16.820174    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.820487    1708 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 11:39:16.820493    1708 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 11:39:16.883084    1708 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 11:39:16.886154    1708 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 11:39:16.886167    1708 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 11:39:16.886178    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:16.886526    1708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 11:39:16.886532    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 11:39:16.888861    1708 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 11:39:16.888869    1708 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 11:39:16.971926    1708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 11:39:16.971941    1708 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 11:39:17.013014    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 11:39:17.017576    1708 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 11:39:17.017585    1708 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 11:39:17.022703    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 11:39:17.052184    1708 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 11:39:17.056083    1708 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 11:39:17.060245    1708 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 11:39:17.060257    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 11:39:17.060268    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:17.066930    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:17.124568    1708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 11:39:17.124581    1708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 11:39:17.148855    1708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 11:39:17.148868    1708 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 11:39:17.151579    1708 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 11:39:17.151588    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 11:39:17.165240    1708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 11:39:17.165253    1708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 11:39:17.176233    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 11:39:17.178402    1708 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 11:39:17.178411    1708 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 11:39:17.192652    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 11:39:17.204539    1708 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 11:39:17.204553    1708 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 11:39:17.205242    1708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 11:39:17.205248    1708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 11:39:17.214764    1708 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 11:39:17.214779    1708 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 11:39:17.259208    1708 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 11:39:17.259222    1708 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 11:39:17.264048    1708 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 11:39:17.264056    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 11:39:17.296307    1708 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 11:39:17.296320    1708 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 11:39:17.312571    1708 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 11:39:17.312582    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 11:39:17.316744    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 11:39:17.378825    1708 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 11:39:17.378836    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 11:39:17.379052    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 11:39:17.499859    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 11:39:17.560235    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:17.674930    1708 pod_ready.go:103] pod "etcd-addons-700000" in "kube-system" namespace has status "Ready":"False"
	I0919 11:39:18.084280    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:18.642431    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:19.059265    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:19.191709    1708 pod_ready.go:93] pod "etcd-addons-700000" in "kube-system" namespace has status "Ready":"True"
	I0919 11:39:19.191722    1708 pod_ready.go:82] duration metric: took 3.520593334s for pod "etcd-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:19.191727    1708 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:19.329586    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.735154542s)
	I0919 11:39:19.329607    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.881828667s)
	I0919 11:39:19.329615    1708 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-700000"
	I0919 11:39:19.329670    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.695097833s)
	I0919 11:39:19.329683    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.652914375s)
	I0919 11:39:19.340993    1708 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 11:39:19.347418    1708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 11:39:19.370635    1708 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 11:39:19.370645    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:19.560133    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:19.884098    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:20.085916    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:20.255511    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.232864209s)
	I0919 11:39:20.255584    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.242633541s)
	I0919 11:39:20.255637    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.079464334s)
	I0919 11:39:20.255646    1708 addons.go:475] Verifying addon metrics-server=true in "addons-700000"
	I0919 11:39:20.255706    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.063115s)
	I0919 11:39:20.255727    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.939043708s)
	I0919 11:39:20.255734    1708 addons.go:475] Verifying addon registry=true in "addons-700000"
	I0919 11:39:20.255776    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.876783125s)
	I0919 11:39:20.255815    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.756010125s)
	W0919 11:39:20.255830    1708 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 11:39:20.255842    1708 retry.go:31] will retry after 167.294834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I0919 11:39:20.264983    1708 out.go:177] * Verifying registry addon...
	I0919 11:39:20.268918    1708 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-700000 service yakd-dashboard -n yakd-dashboard
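
For scripted use, the same subcommand takes a --url flag that prints the service URL instead of launching a browser:

    minikube -p addons-700000 service yakd-dashboard -n yakd-dashboard --url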
	
	I0919 11:39:20.275366    1708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 11:39:20.342949    1708 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 11:39:20.342961    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:20.425269    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
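
The failed apply raced CRD registration: one kubectl invocation created the VolumeSnapshot CRDs and, in the same pass, tried to create a VolumeSnapshotClass before discovery knew the new kind, hence "ensure CRDs are installed first". The remedy is the retry shown here, re-running the apply (now with --force) once the CRDs have settled. A sketch of that retry loop (illustrative; the file list is abbreviated):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    // applyOnce runs kubectl apply over the given files and returns combined output.
    func applyOnce(files []string, force bool) (string, error) {
        args := []string{"apply"}
        if force {
            args = append(args, "--force")
        }
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        files := []string{
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            // ...the remaining snapshot manifests from the log...
        }
        for attempt := 0; ; attempt++ {
            out, err := applyOnce(files, attempt > 0)
            if err == nil {
                return
            }
            // "no matches for kind" means discovery has not registered the new
            // CRDs yet, which is exactly the failure logged at 11:39:20 above.
            if !strings.Contains(out, "no matches for kind") || attempt >= 4 {
                panic(fmt.Sprintf("%v\n%s", err, out))
            }
            time.Sleep(200 * time.Millisecond)
        }
    }
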
	I0919 11:39:20.465788    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:20.559103    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:20.779067    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:20.883186    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:21.059600    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:21.197340    1708 pod_ready.go:103] pod "kube-apiserver-addons-700000" in "kube-system" namespace has status "Ready":"False"
	I0919 11:39:21.277686    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:21.353468    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:21.559186    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:21.779806    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:21.851462    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:22.059118    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:22.277307    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:22.351547    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:22.559944    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:22.779604    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:22.850610    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:22.917565    1708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.492334583s)
	I0919 11:39:23.061640    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:23.232929    1708 pod_ready.go:103] pod "kube-apiserver-addons-700000" in "kube-system" namespace has status "Ready":"False"
	I0919 11:39:23.277179    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:23.351507    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:23.558989    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:23.779226    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:23.857768    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:24.059174    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:24.072185    1708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 11:39:24.072202    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:24.108732    1708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 11:39:24.114467    1708 addons.go:234] Setting addon gcp-auth=true in "addons-700000"
	I0919 11:39:24.114487    1708 host.go:66] Checking if "addons-700000" exists ...
	I0919 11:39:24.115260    1708 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 11:39:24.115267    1708 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/addons-700000/id_rsa Username:docker}
	I0919 11:39:24.148489    1708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 11:39:24.152598    1708 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 11:39:24.157432    1708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 11:39:24.157441    1708 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 11:39:24.163426    1708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 11:39:24.163432    1708 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 11:39:24.169105    1708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 11:39:24.169111    1708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 11:39:24.174985    1708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 11:39:24.279120    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:24.355547    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:24.387894    1708 addons.go:475] Verifying addon gcp-auth=true in "addons-700000"
	I0919 11:39:24.393172    1708 out.go:177] * Verifying gcp-auth addon...
	I0919 11:39:24.405485    1708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 11:39:24.455644    1708 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 11:39:24.559135    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:24.777668    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:24.851766    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:25.059428    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:25.279475    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:25.351745    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:25.558672    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:25.694939    1708 pod_ready.go:103] pod "kube-apiserver-addons-700000" in "kube-system" namespace has status "Ready":"False"
	I0919 11:39:25.779281    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:25.882159    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:26.059594    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:26.279183    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:26.351677    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:26.558564    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:26.700563    1708 pod_ready.go:93] pod "kube-apiserver-addons-700000" in "kube-system" namespace has status "Ready":"True"
	I0919 11:39:26.700572    1708 pod_ready.go:82] duration metric: took 7.509021833s for pod "kube-apiserver-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:26.700577    1708 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:26.703308    1708 pod_ready.go:93] pod "kube-controller-manager-addons-700000" in "kube-system" namespace has status "Ready":"True"
	I0919 11:39:26.703314    1708 pod_ready.go:82] duration metric: took 2.73325ms for pod "kube-controller-manager-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:26.703318    1708 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:26.707836    1708 pod_ready.go:93] pod "kube-scheduler-addons-700000" in "kube-system" namespace has status "Ready":"True"
	I0919 11:39:26.707845    1708 pod_ready.go:82] duration metric: took 4.523875ms for pod "kube-scheduler-addons-700000" in "kube-system" namespace to be "Ready" ...
	I0919 11:39:26.707848    1708 pod_ready.go:39] duration metric: took 11.049243459s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 11:39:26.707856    1708 api_server.go:52] waiting for apiserver process to appear ...
	I0919 11:39:26.707925    1708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 11:39:26.731425    1708 api_server.go:72] duration metric: took 11.332553667s to wait for apiserver process to appear ...
	I0919 11:39:26.731436    1708 api_server.go:88] waiting for apiserver healthz status ...
	I0919 11:39:26.731446    1708 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0919 11:39:26.734838    1708 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
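
No credentials are needed for this probe: the default system:public-info-viewer binding exposes /healthz (and /livez, /readyz) to unauthenticated callers. The same check by hand, against the address from the log (a sketch; TLS verification is skipped only because the test apiserver's certificate is self-signed for the VM IP):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := c.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
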
	I0919 11:39:26.735665    1708 api_server.go:141] control plane version: v1.31.1
	I0919 11:39:26.735672    1708 api_server.go:131] duration metric: took 4.233375ms to wait for apiserver health ...
	I0919 11:39:26.735682    1708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 11:39:26.741616    1708 system_pods.go:59] 17 kube-system pods found
	I0919 11:39:26.741630    1708 system_pods.go:61] "coredns-7c65d6cfc9-x6mfg" [54189b54-23c7-43ed-8dc5-d957ea71f1d2] Running
	I0919 11:39:26.741634    1708 system_pods.go:61] "csi-hostpath-attacher-0" [9050f406-f036-4ea0-bfa4-1631de0f2bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 11:39:26.741645    1708 system_pods.go:61] "csi-hostpath-resizer-0" [0d7c771c-a753-40a6-be26-fe810d68d25c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 11:39:26.741650    1708 system_pods.go:61] "csi-hostpathplugin-dgt85" [e46d4df9-c14d-4544-a7bf-7e09fc369933] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 11:39:26.741653    1708 system_pods.go:61] "etcd-addons-700000" [d7b54932-949e-4a6d-a56f-4c10ffd9565f] Running
	I0919 11:39:26.741657    1708 system_pods.go:61] "kube-apiserver-addons-700000" [be345665-2a2c-4bd4-a7d3-c732cd600477] Running
	I0919 11:39:26.741659    1708 system_pods.go:61] "kube-controller-manager-addons-700000" [6b751c76-fdd6-4941-ac7c-cdec8949180e] Running
	I0919 11:39:26.741664    1708 system_pods.go:61] "kube-ingress-dns-minikube" [c96cd254-c7d5-4b88-864a-295418f14942] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 11:39:26.741666    1708 system_pods.go:61] "kube-proxy-dv6l6" [fdf30315-5926-4c9d-b0c9-77bec6037465] Running
	I0919 11:39:26.741670    1708 system_pods.go:61] "kube-scheduler-addons-700000" [186a3155-7f94-4887-ae94-993e96f1546e] Running
	I0919 11:39:26.741673    1708 system_pods.go:61] "metrics-server-84c5f94fbc-kwrlr" [ed9b97be-9528-4e60-ba30-1c4859ac01d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 11:39:26.741676    1708 system_pods.go:61] "nvidia-device-plugin-daemonset-55z9t" [89442d59-5a01-4b79-b0f5-c99cea04d337] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 11:39:26.741681    1708 system_pods.go:61] "registry-66c9cd494c-scwnh" [8bdf6123-9742-4fb5-a3a6-2eb970734d28] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 11:39:26.741683    1708 system_pods.go:61] "registry-proxy-bdzcj" [08fd319b-dcb5-4641-aaf4-0cde96c2f1c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 11:39:26.741689    1708 system_pods.go:61] "snapshot-controller-56fcc65765-6526r" [784400ed-7fd4-4728-b981-3a0b57e9b14c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 11:39:26.741694    1708 system_pods.go:61] "snapshot-controller-56fcc65765-6h9w4" [225656a9-da73-4a26-ba9d-602785fc994b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 11:39:26.741696    1708 system_pods.go:61] "storage-provisioner" [8f20dab2-df44-485b-a4b8-2516e92df10f] Running
	I0919 11:39:26.741699    1708 system_pods.go:74] duration metric: took 6.013459ms to wait for pod list to return data ...
	I0919 11:39:26.741703    1708 default_sa.go:34] waiting for default service account to be created ...
	I0919 11:39:26.745070    1708 default_sa.go:45] found service account: "default"
	I0919 11:39:26.745079    1708 default_sa.go:55] duration metric: took 3.372875ms for default service account to be created ...
	I0919 11:39:26.745082    1708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 11:39:26.751434    1708 system_pods.go:86] 17 kube-system pods found
	I0919 11:39:26.751443    1708 system_pods.go:89] "coredns-7c65d6cfc9-x6mfg" [54189b54-23c7-43ed-8dc5-d957ea71f1d2] Running
	I0919 11:39:26.751448    1708 system_pods.go:89] "csi-hostpath-attacher-0" [9050f406-f036-4ea0-bfa4-1631de0f2bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 11:39:26.751452    1708 system_pods.go:89] "csi-hostpath-resizer-0" [0d7c771c-a753-40a6-be26-fe810d68d25c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 11:39:26.751455    1708 system_pods.go:89] "csi-hostpathplugin-dgt85" [e46d4df9-c14d-4544-a7bf-7e09fc369933] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 11:39:26.751457    1708 system_pods.go:89] "etcd-addons-700000" [d7b54932-949e-4a6d-a56f-4c10ffd9565f] Running
	I0919 11:39:26.751459    1708 system_pods.go:89] "kube-apiserver-addons-700000" [be345665-2a2c-4bd4-a7d3-c732cd600477] Running
	I0919 11:39:26.751475    1708 system_pods.go:89] "kube-controller-manager-addons-700000" [6b751c76-fdd6-4941-ac7c-cdec8949180e] Running
	I0919 11:39:26.751485    1708 system_pods.go:89] "kube-ingress-dns-minikube" [c96cd254-c7d5-4b88-864a-295418f14942] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 11:39:26.751487    1708 system_pods.go:89] "kube-proxy-dv6l6" [fdf30315-5926-4c9d-b0c9-77bec6037465] Running
	I0919 11:39:26.751490    1708 system_pods.go:89] "kube-scheduler-addons-700000" [186a3155-7f94-4887-ae94-993e96f1546e] Running
	I0919 11:39:26.751493    1708 system_pods.go:89] "metrics-server-84c5f94fbc-kwrlr" [ed9b97be-9528-4e60-ba30-1c4859ac01d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 11:39:26.751496    1708 system_pods.go:89] "nvidia-device-plugin-daemonset-55z9t" [89442d59-5a01-4b79-b0f5-c99cea04d337] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 11:39:26.751500    1708 system_pods.go:89] "registry-66c9cd494c-scwnh" [8bdf6123-9742-4fb5-a3a6-2eb970734d28] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 11:39:26.751503    1708 system_pods.go:89] "registry-proxy-bdzcj" [08fd319b-dcb5-4641-aaf4-0cde96c2f1c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 11:39:26.751505    1708 system_pods.go:89] "snapshot-controller-56fcc65765-6526r" [784400ed-7fd4-4728-b981-3a0b57e9b14c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 11:39:26.751509    1708 system_pods.go:89] "snapshot-controller-56fcc65765-6h9w4" [225656a9-da73-4a26-ba9d-602785fc994b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 11:39:26.751511    1708 system_pods.go:89] "storage-provisioner" [8f20dab2-df44-485b-a4b8-2516e92df10f] Running
	I0919 11:39:26.751515    1708 system_pods.go:126] duration metric: took 6.429958ms to wait for k8s-apps to be running ...
	I0919 11:39:26.751519    1708 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 11:39:26.751580    1708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
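
That check carries no output: systemctl is-active --quiet reports the unit state purely through its exit code, so the kubelet liveness test reduces to "did the command exit zero". Locally (a sketch, without the sudo that the in-guest invocation uses):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; anything else means it is not.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
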
	I0919 11:39:26.769576    1708 system_svc.go:56] duration metric: took 18.051625ms WaitForService to wait for kubelet
	I0919 11:39:26.769589    1708 kubeadm.go:582] duration metric: took 11.370723292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 11:39:26.769601    1708 node_conditions.go:102] verifying NodePressure condition ...
	I0919 11:39:26.771981    1708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 11:39:26.771991    1708 node_conditions.go:123] node cpu capacity is 2
	I0919 11:39:26.771997    1708 node_conditions.go:105] duration metric: took 2.393916ms to run NodePressure ...
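
The NodePressure check reads capacity and conditions straight off the node object; the figures above (17734596Ki of ephemeral storage, 2 CPUs) are the node's reported capacity. A client-go sketch of the same verification (hypothetical code, not minikube's node_conditions.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                memory := c.Type == corev1.NodeMemoryPressure
                disk := c.Type == corev1.NodeDiskPressure
                if (memory || disk) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  node %s is under %s\n", n.Name, c.Type)
                }
            }
        }
    }
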
	I0919 11:39:26.772003    1708 start.go:241] waiting for startup goroutines ...
	I0919 11:39:26.778026    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:26.852643    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:27.060442    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:27.279741    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:27.354732    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:27.557563    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:27.779203    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:27.851697    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:28.058980    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:28.279029    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:28.351719    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:28.559320    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:28.778701    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:28.851523    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:29.059052    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:29.279052    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:29.351513    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:29.558818    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:29.778881    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:29.851383    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:30.058860    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:30.279088    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:30.351632    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:30.558524    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:30.778798    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:30.851502    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:31.057110    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:31.278681    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:31.351532    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:31.559640    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:31.778938    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:31.852530    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:32.068312    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:32.277782    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:32.351459    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:32.558629    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:32.778889    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:32.849709    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:33.058862    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:33.279220    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:33.349705    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:33.558816    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:33.778995    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:33.851520    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:34.058786    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:34.278440    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:34.351481    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:34.558921    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:34.778867    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:34.851396    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:35.058693    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:35.278783    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:35.351537    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:35.558876    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:35.778930    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:35.851490    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:36.058796    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:36.278869    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:36.351463    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:36.571985    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:36.777992    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:36.851471    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:37.064681    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:37.280093    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:37.353405    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:37.562790    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:37.779068    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:37.850760    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:38.058579    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:38.278556    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:38.351062    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:38.558823    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:38.779042    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:38.854513    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:39.059345    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:39.278943    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:39.380240    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:39.558676    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:39.778729    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:39.851330    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:40.058720    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:40.278719    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:40.380519    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:40.559756    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:40.778809    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:40.850991    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:41.058858    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:41.279202    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:41.355978    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:41.558722    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:41.778663    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:41.851200    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:42.058236    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:42.278472    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:42.351206    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:42.560473    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:42.778704    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:42.851301    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:43.060434    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:43.277750    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:43.352272    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:43.566777    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:43.778906    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:43.851149    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:44.086423    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:44.278684    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:44.351214    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:44.558570    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:44.778680    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:44.851247    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:45.091656    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:45.277648    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 11:39:45.378971    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:45.559290    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:45.778320    1708 kapi.go:107] duration metric: took 25.503565666s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 11:39:45.850790    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:46.059119    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:46.353388    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:46.562476    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:46.851100    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:47.058434    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:47.351112    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:47.558944    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:47.851262    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:48.058757    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:48.351349    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:48.558418    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:48.850955    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:49.058743    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:49.351500    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:49.559553    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:49.849552    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:50.058573    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:50.350908    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:50.558284    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:50.851321    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:51.058407    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:51.377089    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:51.558613    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:51.849822    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:52.069402    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:52.351242    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:52.558563    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:52.850917    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:53.058227    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:53.350984    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:53.558712    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:53.851224    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:54.059097    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:54.352300    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:54.558295    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:54.851004    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:55.058422    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:55.351255    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:55.558466    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:55.850743    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:56.060082    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:56.353832    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:56.558914    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:56.850908    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:57.058793    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:57.351713    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:57.558426    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:57.850694    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:58.058185    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:58.349865    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:58.558415    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:58.850748    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:59.058242    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:59.350805    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:39:59.558235    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:39:59.850698    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:00.060202    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:00.352929    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:00.558804    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:00.850548    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:01.058092    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:01.350876    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:01.558052    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:01.851026    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:02.245077    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:02.350765    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:02.558511    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:02.853912    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:03.058343    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:03.350845    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:03.558441    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:03.850602    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:04.057232    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:04.350506    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:04.558038    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:04.850595    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:05.058824    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:05.353160    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:05.557978    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:05.851537    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:06.057906    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:06.350585    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:06.556629    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:06.852009    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:07.065030    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:07.357911    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:07.559033    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:07.850506    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:08.057815    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:08.350643    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:08.558012    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:08.851142    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:09.059548    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:09.350734    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:09.558096    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:09.850722    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:10.056652    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:10.351846    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:10.557774    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:10.850323    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:11.057809    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:11.350568    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:11.558375    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:11.850292    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:12.058105    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:12.350978    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:12.557738    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:12.850566    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:13.055920    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:13.350401    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:13.557700    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:13.892277    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:14.057908    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:14.350633    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:14.557861    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:14.850665    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:15.057756    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:15.350497    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:15.556327    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:15.850416    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:16.058884    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:16.352291    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:16.557730    1708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 11:40:16.850315    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:17.057701    1708 kapi.go:107] duration metric: took 1m1.004063709s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 11:40:17.351031    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:17.850317    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:18.351353    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:18.848579    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:19.350890    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:19.850051    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:20.350589    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:20.850395    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:21.350666    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:21.850538    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:22.350518    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:22.850339    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:23.349916    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:23.850020    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:24.350258    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:24.850441    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:25.350471    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 11:40:25.851483    1708 kapi.go:107] duration metric: took 1m6.505661375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 11:40:46.907429    1708 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 11:40:46.907442    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:47.409509    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:47.909127    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:48.409586    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:48.907548    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:49.412230    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:49.908400    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:50.408399    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:50.908547    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:51.407924    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:51.909691    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:52.412874    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:52.908601    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:53.412146    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:53.908455    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:54.408009    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:54.908471    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:55.411993    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:55.908191    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:56.407951    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:56.907322    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:57.409629    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:57.908090    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:58.409535    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:58.907750    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:59.409215    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:40:59.911167    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:00.418143    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:00.908511    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:01.407107    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:01.911410    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:02.413006    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:02.908809    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:03.413769    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:03.907631    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:04.411724    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:04.907775    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:05.411572    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:05.905836    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:06.406458    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:06.906448    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:07.407264    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:07.906688    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:08.407667    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:08.907940    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:09.411816    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:09.909601    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:10.413086    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:10.906791    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:11.407070    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:11.907607    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:12.411325    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:12.909011    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:13.413134    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:13.910411    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:14.409680    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:14.907908    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:15.412740    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:15.906419    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:16.413265    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:16.907581    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:17.413582    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:17.908541    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:18.409185    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:18.907354    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:19.409896    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:19.908309    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:20.411814    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:20.908089    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:21.406291    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:21.907305    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:22.411045    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:22.908171    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:23.412566    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:23.907823    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:24.407608    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:24.906667    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:25.412893    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:25.908008    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:26.413839    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:26.908064    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:27.407869    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:27.906206    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:28.408043    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:28.906748    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:29.407229    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:29.906400    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:30.407320    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:30.907765    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:31.406540    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:31.907990    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:32.406671    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:32.906111    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:33.407465    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:33.907412    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:34.407779    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:34.907988    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:35.412630    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:35.906728    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:36.413524    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:36.907362    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:37.412330    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:37.906805    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:38.406418    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:38.907480    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:39.408326    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:39.908924    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:40.408063    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:40.904476    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:41.406008    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:41.907407    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:42.408539    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:42.909071    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:43.408126    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:43.906794    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:44.408186    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:44.907514    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:45.406958    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:45.908029    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:46.406952    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:46.906769    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:47.407897    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:47.908849    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:48.408221    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:48.906468    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:49.405231    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:49.907668    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:50.407064    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:50.905798    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:51.413435    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:51.905568    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:52.405989    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:52.903861    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:53.405931    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:53.905451    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:54.435630    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:54.905757    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:55.406667    1708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 11:41:55.905543    1708 kapi.go:107] duration metric: took 2m31.503692667s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 11:41:55.910690    1708 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-700000 cluster.
	I0919 11:41:55.914746    1708 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 11:41:55.918722    1708 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 11:41:55.922758    1708 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, storage-provisioner, ingress-dns, nvidia-device-plugin, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0919 11:41:55.926711    1708 addons.go:510] duration metric: took 2m40.531408375s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher storage-provisioner ingress-dns nvidia-device-plugin volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0919 11:41:55.926725    1708 start.go:246] waiting for cluster config update ...
	I0919 11:41:55.926735    1708 start.go:255] writing updated cluster config ...
	I0919 11:41:55.927501    1708 ssh_runner.go:195] Run: rm -f paused
	I0919 11:41:56.086348    1708 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0919 11:41:56.090789    1708 out.go:201] 
	W0919 11:41:56.093775    1708 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0919 11:41:56.097700    1708 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0919 11:41:56.111647    1708 out.go:177] * Done! kubectl is now configured to use "addons-700000" cluster and "default" namespace by default
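
	Note: the long runs of kapi.go:96 lines above are minikube polling each addon's label selector until its pods leave Pending, after which kapi.go:107 logs a duration metric. As a rough illustration only — not minikube's actual kapi code; WaitForPods, the half-second interval, and the kube-system namespace are assumptions — a client-go polling loop of this shape could look like:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // WaitForPods polls every interval until every pod matching selector in ns
	    // is Running, or the timeout elapses. Hypothetical helper, for illustration.
	    func WaitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	    		if err != nil {
	    			return err
	    		}
	    		allRunning := len(pods.Items) > 0
	    		for _, p := range pods.Items {
	    			if p.Status.Phase != corev1.PodRunning {
	    				// Mirrors the shape of the kapi.go:96 lines in the log above.
	    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
	    				allRunning = false
	    			}
	    		}
	    		if allRunning {
	    			return nil
	    		}
	    		time.Sleep(interval)
	    	}
	    	return fmt.Errorf("timed out waiting for %s", selector)
	    }

	    func main() {
	    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	client := kubernetes.NewForConfigOrDie(config)
	    	start := time.Now()
	    	if err := WaitForPods(context.Background(), client, "kube-system",
	    		"kubernetes.io/minikube-addons=registry", 500*time.Millisecond, 6*time.Minute); err != nil {
	    		panic(err)
	    	}
	    	// Mirrors the kapi.go:107 duration metric once the selector is satisfied.
	    	fmt.Printf("duration metric: took %s to wait for pods\n", time.Since(start))
	    }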
	
	
	==> Docker <==
	Sep 19 18:51:40 addons-700000 dockerd[1283]: time="2024-09-19T18:51:40.378296954Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:40 addons-700000 dockerd[1277]: time="2024-09-19T18:51:40.450699409Z" level=info msg="ignoring event" container=aedb0a1136f930e424bf207f8dee3f6f0a90c334922e8e1595e14d4bb3c83409 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:40 addons-700000 dockerd[1283]: time="2024-09-19T18:51:40.451051659Z" level=info msg="shim disconnected" id=aedb0a1136f930e424bf207f8dee3f6f0a90c334922e8e1595e14d4bb3c83409 namespace=moby
	Sep 19 18:51:40 addons-700000 dockerd[1283]: time="2024-09-19T18:51:40.451081701Z" level=warning msg="cleaning up after shim disconnected" id=aedb0a1136f930e424bf207f8dee3f6f0a90c334922e8e1595e14d4bb3c83409 namespace=moby
	Sep 19 18:51:40 addons-700000 dockerd[1283]: time="2024-09-19T18:51:40.451085868Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:45 addons-700000 dockerd[1277]: time="2024-09-19T18:51:45.994695263Z" level=info msg="ignoring event" container=f55520d94338d255d8fe53fb8265c4c3a470f1902d8237d7bd112810d4cd44f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:45 addons-700000 dockerd[1283]: time="2024-09-19T18:51:45.995316805Z" level=info msg="shim disconnected" id=f55520d94338d255d8fe53fb8265c4c3a470f1902d8237d7bd112810d4cd44f9 namespace=moby
	Sep 19 18:51:45 addons-700000 dockerd[1283]: time="2024-09-19T18:51:45.995422096Z" level=warning msg="cleaning up after shim disconnected" id=f55520d94338d255d8fe53fb8265c4c3a470f1902d8237d7bd112810d4cd44f9 namespace=moby
	Sep 19 18:51:45 addons-700000 dockerd[1283]: time="2024-09-19T18:51:45.995453138Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1277]: time="2024-09-19T18:51:46.153761967Z" level=info msg="ignoring event" container=f5632fe90208c71a80b371cec42965c6e6fd5c2bb9e227acf5ce49a29f89eba0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.154451884Z" level=info msg="shim disconnected" id=f5632fe90208c71a80b371cec42965c6e6fd5c2bb9e227acf5ce49a29f89eba0 namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.154506009Z" level=warning msg="cleaning up after shim disconnected" id=f5632fe90208c71a80b371cec42965c6e6fd5c2bb9e227acf5ce49a29f89eba0 namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.154524259Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.183254943Z" level=info msg="shim disconnected" id=c6d8cf4eca101f17d735d6da9e71bf580bdc503f95b76e29cbbb69821b1fb33c namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.183294026Z" level=warning msg="cleaning up after shim disconnected" id=c6d8cf4eca101f17d735d6da9e71bf580bdc503f95b76e29cbbb69821b1fb33c namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.183298151Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1277]: time="2024-09-19T18:51:46.184240277Z" level=info msg="ignoring event" container=c6d8cf4eca101f17d735d6da9e71bf580bdc503f95b76e29cbbb69821b1fb33c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:46 addons-700000 dockerd[1277]: time="2024-09-19T18:51:46.275601998Z" level=info msg="ignoring event" container=ded7a49eac6345734c050f68439d90b8d48c6d26d2b3f24aedd2821aee123101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.275513081Z" level=info msg="shim disconnected" id=ded7a49eac6345734c050f68439d90b8d48c6d26d2b3f24aedd2821aee123101 namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.275829289Z" level=warning msg="cleaning up after shim disconnected" id=ded7a49eac6345734c050f68439d90b8d48c6d26d2b3f24aedd2821aee123101 namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.275875373Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.292749967Z" level=info msg="shim disconnected" id=06440ba2bda54b0674ef51cf43360ed34b22c7a226bacc0de09bff867b1604de namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.292793508Z" level=warning msg="cleaning up after shim disconnected" id=06440ba2bda54b0674ef51cf43360ed34b22c7a226bacc0de09bff867b1604de namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1283]: time="2024-09-19T18:51:46.292798717Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:51:46 addons-700000 dockerd[1277]: time="2024-09-19T18:51:46.292931883Z" level=info msg="ignoring event" container=06440ba2bda54b0674ef51cf43360ed34b22c7a226bacc0de09bff867b1604de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	2311cc1ea0fae       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            38 seconds ago      Exited              gadget                     7                   4972aa07736a1       gadget-b7hcq
	1446164f95021       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   449f29e06ae10       gcp-auth-89d5ffd79-cnmk6
	0e0a04bcb17c2       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   782dd5e7a24c9       ingress-nginx-controller-bc57996ff-96nxn
	c6d8cf4eca101       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   06440ba2bda54       registry-proxy-bdzcj
	954f21120fe16       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server             0                   9789b4e86b1d9       metrics-server-84c5f94fbc-kwrlr
	f5632fe90208c       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   ded7a49eac634       registry-66c9cd494c-scwnh
	e2b10a2637e99       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   2d243ce8dc301       kube-ingress-dns-minikube
	29c3da6b9cee4       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   162dfc1f43f68       nvidia-device-plugin-daemonset-55z9t
	1e724699c956e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   b62f89da88a25       local-path-provisioner-86d989889c-rbd7c
	55f0daa8466e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   12 minutes ago      Exited              patch                      0                   adf9f2c46bbf8       ingress-nginx-admission-patch-x6kgv
	1bfd4cad7ebf0       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   de65834b42bd9       cloud-spanner-emulator-769b77f747-rmrtd
	63b13f0768c85       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   12 minutes ago      Exited              create                     0                   726e3c2a0b9e1       ingress-nginx-admission-create-t98zj
	b5c52f7a93bf8       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   ae01894b27f47       storage-provisioner
	60b7cb58de12a       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   21fc7fdfacd8b       coredns-7c65d6cfc9-x6mfg
	dc2fb39516874       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   da94b6742d6b2       kube-proxy-dv6l6
	42524c02e8251       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   9904816bc6ba2       kube-controller-manager-addons-700000
	111b6f9f42f41       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   05c7c645f38a3       kube-apiserver-addons-700000
	1bab93fd14312       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   f5652f55db99c       kube-scheduler-addons-700000
	248c0b938ed7a       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   ffa1ccff6611d       etcd-addons-700000
	
	
	==> controller_ingress [0e0a04bcb17c] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0919 18:40:15.923823       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0919 18:40:15.923919       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0919 18:40:15.926817       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0919 18:40:15.986731       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0919 18:40:15.992335       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0919 18:40:15.996431       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0919 18:40:16.000609       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"261e027e-c44e-4cb0-816f-4a7872cd8e05", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0919 18:40:16.002676       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"a8a11394-863d-4b9a-b8ea-b6bf86abab2b", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0919 18:40:16.003484       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"d95c94b3-4db4-48ee-bd2c-2650c208f7d4", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0919 18:40:17.197965       7 nginx.go:317] "Starting NGINX process"
	I0919 18:40:17.198076       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0919 18:40:17.198130       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0919 18:40:17.198290       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0919 18:40:17.209080       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0919 18:40:17.210324       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-96nxn"
	I0919 18:40:17.215421       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-96nxn" node="addons-700000"
	I0919 18:40:17.227503       7 controller.go:213] "Backend successfully reloaded"
	I0919 18:40:17.227545       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0919 18:40:17.227714       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-96nxn", UID:"720f6d50-7c79-4be2-b340-622c5da1b7ec", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [60b7cb58de12] <==
	[INFO] 127.0.0.1:48878 - 23067 "HINFO IN 460431741811852412.2540490495097175360. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009537s
	[INFO] 10.244.0.10:48438 - 19948 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096385s
	[INFO] 10.244.0.10:48438 - 32746 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136446s
	[INFO] 10.244.0.10:33013 - 47992 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028412s
	[INFO] 10.244.0.10:33013 - 48249 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050294s
	[INFO] 10.244.0.10:48605 - 60514 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028912s
	[INFO] 10.244.0.10:48605 - 48483 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064437s
	[INFO] 10.244.0.10:36624 - 18136 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036108s
	[INFO] 10.244.0.10:36624 - 60889 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00003981s
	[INFO] 10.244.0.10:44315 - 63440 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037106s
	[INFO] 10.244.0.10:44315 - 5841 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000059612s
	[INFO] 10.244.0.10:39422 - 30569 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000012896s
	[INFO] 10.244.0.10:39422 - 28266 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000014809s
	[INFO] 10.244.0.10:37151 - 9172 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025168s
	[INFO] 10.244.0.10:37151 - 4821 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000017762s
	[INFO] 10.244.0.10:56704 - 13145 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000019843s
	[INFO] 10.244.0.10:56704 - 43099 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000016224s
	[INFO] 10.244.0.24:45631 - 11491 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.006565749s
	[INFO] 10.244.0.24:40014 - 9064 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0069275s
	[INFO] 10.244.0.24:54313 - 46165 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191581s
	[INFO] 10.244.0.24:56241 - 44011 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194747s
	[INFO] 10.244.0.24:36315 - 45682 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000034909s
	[INFO] 10.244.0.24:41867 - 48722 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000028618s
	[INFO] 10.244.0.24:50646 - 31214 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.004171005s
	[INFO] 10.244.0.24:33412 - 64861 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004700133s
	
	
	==> describe nodes <==
	Name:               addons-700000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-700000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-700000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T11_39_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-700000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:39:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-700000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:51:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:47:50 +0000   Thu, 19 Sep 2024 18:39:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:47:50 +0000   Thu, 19 Sep 2024 18:39:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:47:50 +0000   Thu, 19 Sep 2024 18:39:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:47:50 +0000   Thu, 19 Sep 2024 18:39:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-700000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a99d8a855e247898b5a9d7b3952100e
	  System UUID:                3a99d8a855e247898b5a9d7b3952100e
	  Boot ID:                    6d8e7795-66be-4d6e-815b-b1ba87e20df7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-rmrtd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-b7hcq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-cnmk6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-96nxn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-x6mfg                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-700000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-700000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-700000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dv6l6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-700000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-kwrlr             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-55z9t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-rbd7c     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-700000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-700000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-700000 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-700000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-700000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-700000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-700000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-700000 event: Registered Node addons-700000 in Controller
	
	
	==> dmesg <==
	[  +0.048783] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.139254] kauditd_printk_skb: 305 callbacks suppressed
	[  +5.131451] kauditd_printk_skb: 56 callbacks suppressed
	[ +19.519419] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.103963] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.002831] kauditd_printk_skb: 5 callbacks suppressed
	[Sep19 18:40] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.097634] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.447755] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.730893] kauditd_printk_skb: 19 callbacks suppressed
	[Sep19 18:41] kauditd_printk_skb: 6 callbacks suppressed
	[ +21.850685] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.805882] kauditd_printk_skb: 46 callbacks suppressed
	[Sep19 18:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.131130] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.367412] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.107628] kauditd_printk_skb: 2 callbacks suppressed
	[Sep19 18:46] kauditd_printk_skb: 2 callbacks suppressed
	[Sep19 18:50] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.956976] kauditd_printk_skb: 19 callbacks suppressed
	[Sep19 18:51] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.798294] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.808129] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.211025] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.610324] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [248c0b938ed7] <==
	{"level":"info","ts":"2024-09-19T18:39:08.182399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:08.182403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:08.182409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:08.182413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:08.190336Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-700000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T18:39:08.190377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:39:08.190577Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:08.190756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:39:08.190903Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:39:08.190928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:39:08.191353Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:39:08.191996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-19T18:39:08.192166Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:08.192231Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:08.192693Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:39:08.195477Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:39:08.211050Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:23.468319Z","caller":"traceutil/trace.go:171","msg":"trace[2052180850] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"154.050579ms","start":"2024-09-19T18:39:23.314260Z","end":"2024-09-19T18:39:23.468311Z","steps":["trace[2052180850] 'process raft request'  (duration: 153.868518ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:02.308005Z","caller":"traceutil/trace.go:171","msg":"trace[1929523304] linearizableReadLoop","detail":"{readStateIndex:1141; appliedIndex:1140; }","duration":"186.288289ms","start":"2024-09-19T18:40:02.121707Z","end":"2024-09-19T18:40:02.307995Z","steps":["trace[1929523304] 'read index received'  (duration: 146.979493ms)","trace[1929523304] 'applied index is now lower than readState.Index'  (duration: 39.308462ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:40:02.308057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.338294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:40:02.308067Z","caller":"traceutil/trace.go:171","msg":"trace[659339337] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"186.359355ms","start":"2024-09-19T18:40:02.121705Z","end":"2024-09-19T18:40:02.308064Z","steps":["trace[659339337] 'agreement among raft nodes before linearized reading'  (duration: 186.324823ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:02.308147Z","caller":"traceutil/trace.go:171","msg":"trace[1544805241] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"217.735943ms","start":"2024-09-19T18:40:02.090408Z","end":"2024-09-19T18:40:02.308144Z","steps":["trace[1544805241] 'process raft request'  (duration: 178.31054ms)","trace[1544805241] 'compare'  (duration: 39.165663ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:49:08.237163Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1865}
	{"level":"info","ts":"2024-09-19T18:49:08.340410Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1865,"took":"100.233346ms","hash":1747778835,"current-db-size-bytes":8757248,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4882432,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-19T18:49:08.340941Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1747778835,"revision":1865,"compact-revision":-1}
	
	
	==> gcp-auth [1446164f9502] <==
	2024/09/19 18:41:54 GCP Auth Webhook started!
	2024/09/19 18:42:11 Ready to marshal response ...
	2024/09/19 18:42:11 Ready to write response ...
	2024/09/19 18:42:12 Ready to marshal response ...
	2024/09/19 18:42:12 Ready to write response ...
	2024/09/19 18:42:34 Ready to marshal response ...
	2024/09/19 18:42:34 Ready to write response ...
	2024/09/19 18:42:34 Ready to marshal response ...
	2024/09/19 18:42:34 Ready to write response ...
	2024/09/19 18:42:34 Ready to marshal response ...
	2024/09/19 18:42:34 Ready to write response ...
	2024/09/19 18:50:45 Ready to marshal response ...
	2024/09/19 18:50:45 Ready to write response ...
	2024/09/19 18:50:48 Ready to marshal response ...
	2024/09/19 18:50:48 Ready to write response ...
	2024/09/19 18:51:18 Ready to marshal response ...
	2024/09/19 18:51:18 Ready to write response ...
	
	
	==> kernel <==
	 18:51:46 up 12 min,  0 users,  load average: 1.37, 1.00, 0.71
	Linux addons-700000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [111b6f9f42f4] <==
	I0919 18:42:25.197010       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:42:25.202810       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0919 18:42:25.231189       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:42:25.300430       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0919 18:42:25.845114       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0919 18:42:26.203002       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0919 18:42:26.226200       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0919 18:42:26.226299       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0919 18:42:26.286692       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0919 18:42:26.301220       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0919 18:42:26.393371       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0919 18:50:56.043156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 18:51:34.976203       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:34.976218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:34.986288       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:34.986301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:34.992838       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:34.992855       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:35.002798       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:35.020933       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:35.060709       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:35.060722       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:51:35.986845       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:51:36.061080       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0919 18:51:36.068432       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [42524c02e825] <==
	E0919 18:51:37.033295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:37.215063       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:37.215178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:37.491853       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:37.491970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:38.665284       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:38.665348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:39.421898       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:39.422021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:39.634354       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:39.634479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:39.985203       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:39.985328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:51:40.328639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.208µs"
	W0919 18:51:42.358307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:42.358699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:43.868375       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:43.868485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:51:45.679488       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:45.679538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:51:45.963089       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0919 18:51:45.963105       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 18:51:46.128778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.791µs"
	I0919 18:51:46.395991       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0919 18:51:46.396004       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [dc2fb3951687] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 18:39:16.861930       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 18:39:16.868429       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0919 18:39:16.868464       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:39:16.933121       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 18:39:16.933177       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 18:39:16.933205       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:39:16.951790       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:39:16.952011       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:39:16.952022       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:39:16.952843       1 config.go:199] "Starting service config controller"
	I0919 18:39:16.952867       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:39:16.952933       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:39:16.952945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:39:16.953301       1 config.go:328] "Starting node config controller"
	I0919 18:39:16.953310       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:39:17.053208       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:39:17.053228       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:39:17.053352       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1bab93fd1431] <==
	E0919 18:39:08.793190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0919 18:39:08.793234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:08.793179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:08.793527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:08.793113       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:39:08.793569       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:39:09.606673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:39:09.607028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.629681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:39:09.629753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.674614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:09.674722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.698074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:09.698144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.798099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:39:09.798308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.810147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:09.810249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.852286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:39:09.852390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:09.856166       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:39:09.856187       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:39:09.862963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:39:09.863020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:39:11.790967       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:51:36 addons-700000 kubelet[2049]: I0919 18:51:36.937084    2049 scope.go:117] "RemoveContainer" containerID="2311cc1ea0fae7f5a3e3d44f28e8baaa6001138b97b1cd5cfd8d3ebc059464b3"
	Sep 19 18:51:36 addons-700000 kubelet[2049]: E0919 18:51:36.937822    2049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-b7hcq_gadget(69357b99-4c7a-4035-8f01-238311d7da0f)\"" pod="gadget/gadget-b7hcq" podUID="69357b99-4c7a-4035-8f01-238311d7da0f"
	Sep 19 18:51:36 addons-700000 kubelet[2049]: I0919 18:51:36.952101    2049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="225656a9-da73-4a26-ba9d-602785fc994b" path="/var/lib/kubelet/pods/225656a9-da73-4a26-ba9d-602785fc994b/volumes"
	Sep 19 18:51:36 addons-700000 kubelet[2049]: I0919 18:51:36.952575    2049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784400ed-7fd4-4728-b981-3a0b57e9b14c" path="/var/lib/kubelet/pods/784400ed-7fd4-4728-b981-3a0b57e9b14c/volumes"
	Sep 19 18:51:38 addons-700000 kubelet[2049]: E0919 18:51:38.938104    2049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="170f7012-393e-4cb3-a1f7-6b06de675423"
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.585682    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4jjq\" (UniqueName: \"kubernetes.io/projected/16d19d45-47ec-4d58-b51b-8f83c981c5c1-kube-api-access-p4jjq\") pod \"16d19d45-47ec-4d58-b51b-8f83c981c5c1\" (UID: \"16d19d45-47ec-4d58-b51b-8f83c981c5c1\") "
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.589799    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d19d45-47ec-4d58-b51b-8f83c981c5c1-kube-api-access-p4jjq" (OuterVolumeSpecName: "kube-api-access-p4jjq") pod "16d19d45-47ec-4d58-b51b-8f83c981c5c1" (UID: "16d19d45-47ec-4d58-b51b-8f83c981c5c1"). InnerVolumeSpecName "kube-api-access-p4jjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.686189    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p4jjq\" (UniqueName: \"kubernetes.io/projected/16d19d45-47ec-4d58-b51b-8f83c981c5c1-kube-api-access-p4jjq\") on node \"addons-700000\" DevicePath \"\""
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.721802    2049 scope.go:117] "RemoveContainer" containerID="4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9"
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.734923    2049 scope.go:117] "RemoveContainer" containerID="4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9"
	Sep 19 18:51:40 addons-700000 kubelet[2049]: E0919 18:51:40.735442    2049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9" containerID="4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9"
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.735466    2049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9"} err="failed to get container status \"4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4b1b85b8ff94f54dc6d79eade97b48b904c0a49841a2d03b20ee9fca7b52bdc9"
	Sep 19 18:51:40 addons-700000 kubelet[2049]: I0919 18:51:40.939985    2049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d19d45-47ec-4d58-b51b-8f83c981c5c1" path="/var/lib/kubelet/pods/16d19d45-47ec-4d58-b51b-8f83c981c5c1/volumes"
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.131399    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/170f7012-393e-4cb3-a1f7-6b06de675423-gcp-creds\") pod \"170f7012-393e-4cb3-a1f7-6b06de675423\" (UID: \"170f7012-393e-4cb3-a1f7-6b06de675423\") "
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.131428    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5nwr\" (UniqueName: \"kubernetes.io/projected/170f7012-393e-4cb3-a1f7-6b06de675423-kube-api-access-f5nwr\") pod \"170f7012-393e-4cb3-a1f7-6b06de675423\" (UID: \"170f7012-393e-4cb3-a1f7-6b06de675423\") "
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.131577    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/170f7012-393e-4cb3-a1f7-6b06de675423-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "170f7012-393e-4cb3-a1f7-6b06de675423" (UID: "170f7012-393e-4cb3-a1f7-6b06de675423"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.135572    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170f7012-393e-4cb3-a1f7-6b06de675423-kube-api-access-f5nwr" (OuterVolumeSpecName: "kube-api-access-f5nwr") pod "170f7012-393e-4cb3-a1f7-6b06de675423" (UID: "170f7012-393e-4cb3-a1f7-6b06de675423"). InnerVolumeSpecName "kube-api-access-f5nwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.231803    2049 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/170f7012-393e-4cb3-a1f7-6b06de675423-gcp-creds\") on node \"addons-700000\" DevicePath \"\""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.231820    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f5nwr\" (UniqueName: \"kubernetes.io/projected/170f7012-393e-4cb3-a1f7-6b06de675423-kube-api-access-f5nwr\") on node \"addons-700000\" DevicePath \"\""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.433036    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khx2p\" (UniqueName: \"kubernetes.io/projected/8bdf6123-9742-4fb5-a3a6-2eb970734d28-kube-api-access-khx2p\") pod \"8bdf6123-9742-4fb5-a3a6-2eb970734d28\" (UID: \"8bdf6123-9742-4fb5-a3a6-2eb970734d28\") "
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.433070    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5zp4\" (UniqueName: \"kubernetes.io/projected/08fd319b-dcb5-4641-aaf4-0cde96c2f1c6-kube-api-access-j5zp4\") pod \"08fd319b-dcb5-4641-aaf4-0cde96c2f1c6\" (UID: \"08fd319b-dcb5-4641-aaf4-0cde96c2f1c6\") "
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.434344    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdf6123-9742-4fb5-a3a6-2eb970734d28-kube-api-access-khx2p" (OuterVolumeSpecName: "kube-api-access-khx2p") pod "8bdf6123-9742-4fb5-a3a6-2eb970734d28" (UID: "8bdf6123-9742-4fb5-a3a6-2eb970734d28"). InnerVolumeSpecName "kube-api-access-khx2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.434501    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fd319b-dcb5-4641-aaf4-0cde96c2f1c6-kube-api-access-j5zp4" (OuterVolumeSpecName: "kube-api-access-j5zp4") pod "08fd319b-dcb5-4641-aaf4-0cde96c2f1c6" (UID: "08fd319b-dcb5-4641-aaf4-0cde96c2f1c6"). InnerVolumeSpecName "kube-api-access-j5zp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.533675    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-khx2p\" (UniqueName: \"kubernetes.io/projected/8bdf6123-9742-4fb5-a3a6-2eb970734d28-kube-api-access-khx2p\") on node \"addons-700000\" DevicePath \"\""
	Sep 19 18:51:46 addons-700000 kubelet[2049]: I0919 18:51:46.533692    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j5zp4\" (UniqueName: \"kubernetes.io/projected/08fd319b-dcb5-4641-aaf4-0cde96c2f1c6-kube-api-access-j5zp4\") on node \"addons-700000\" DevicePath \"\""
	
	
	==> storage-provisioner [b5c52f7a93bf] <==
	I0919 18:39:19.696030       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:39:19.700315       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:39:19.700333       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:39:19.740529       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:39:19.740599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-700000_11853a29-5f52-4b19-a895-7880786503ee!
	I0919 18:39:19.740991       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfe183f6-9362-4f0b-b06c-0b257649a067", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-700000_11853a29-5f52-4b19-a895-7880786503ee became leader
	I0919 18:39:19.840738       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-700000_11853a29-5f52-4b19-a895-7880786503ee!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-700000 -n addons-700000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-700000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-t98zj ingress-nginx-admission-patch-x6kgv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-700000 describe pod busybox ingress-nginx-admission-create-t98zj ingress-nginx-admission-patch-x6kgv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-700000 describe pod busybox ingress-nginx-admission-create-t98zj ingress-nginx-admission-patch-x6kgv: exit status 1 (41.277625ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-700000/192.168.105.2
	Start Time:       Thu, 19 Sep 2024 11:42:34 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hbjtd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hbjtd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-700000
	  Normal   Pulling    7m44s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t98zj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x6kgv" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-700000 describe pod busybox ingress-nginx-admission-create-t98zj ingress-nginx-admission-patch-x6kgv: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.34s)
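The registry test above ultimately fails because the busybox pod is stuck in ImagePullBackOff: the kubelet's HEAD request to gcr.io came back "unauthorized: authentication failed". That request can be replayed from the host with the standard Docker Registry v2 token flow; the sketch below is a diagnostic aid only (the anonymous token endpoint and the jq dependency are assumptions, not part of the test suite):

	# Fetch an anonymous pull token for the repository, then repeat the
	# manifest HEAD request that the kubelet logged as unauthorized.
	TOKEN=$(curl -s "https://gcr.io/v2/token?scope=repository:k8s-minikube/busybox:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc"

A 200 from the host alongside persistent 401s from the node would point at node-side credentials or clock skew rather than at the image itself.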

TestCertOptions (10.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-665000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-665000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.824995333s)

-- stdout --
	* [cert-options-665000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-665000" primary control-plane node in "cert-options-665000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-665000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-665000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-665000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-665000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.717417ms)

-- stdout --
	* The control-plane node cert-options-665000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-665000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-665000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-665000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
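The assertion at cert_options_test.go:93 expects the kubeconfig server URL to end in the requested --apiserver-port (8555); with clusters: null there is nothing to match. On a successful start the check amounts to something like the following (the jsonpath expression is illustrative, not the test's own code):

	# Expect a server URL such as https://192.168.x.x:8555 once the cluster is up.
	kubectl --context cert-options-665000 config view \
	  -o jsonpath='{.clusters[0].cluster.server}'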
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-665000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-665000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.636167ms)

-- stdout --
	* The control-plane node cert-options-665000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-665000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-665000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-665000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-665000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-19 12:18:11.589317 -0700 PDT m=+2398.111556043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-665000 -n cert-options-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-665000 -n cert-options-665000: exit status 7 (30.750875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-665000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-665000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-665000
--- FAIL: TestCertOptions (10.09s)
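Every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach the host-side daemon socket. A minimal host health check, assuming the Homebrew-style install paths that appear throughout these logs:

	# The socket must exist and a daemon must be holding it open.
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# If nothing is listening, restarting the launchd service usually clears
	# the "Connection refused" (service name assumed from a brew install).
	sudo brew services restart socket_vmnet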

TestCertExpiration (195.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-814000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-814000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.997292666s)

-- stdout --
	* [cert-expiration-814000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-814000" primary control-plane node in "cert-expiration-814000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-814000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-814000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0919 12:18:10.888241    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-814000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-814000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.204487542s)

-- stdout --
	* [cert-expiration-814000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-814000" primary control-plane node in "cert-expiration-814000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-814000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-814000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-814000" primary control-plane node in "cert-expiration-814000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-19 12:21:11.719583 -0700 PDT m=+2578.246746793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-814000 -n cert-expiration-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-814000 -n cert-expiration-814000: exit status 7 (34.440417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-814000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-814000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-814000
--- FAIL: TestCertExpiration (195.32s)
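TestCertExpiration never gets far enough to exercise rotation; both starts fail at VM creation for the same socket_vmnet reason. For reference, once a VM does boot, the expiry that --cert-expiration manipulates can be read directly; a sketch using the same path the test's ssh commands target:

	# Print the apiserver certificate's notAfter date inside the guest.
	out/minikube-darwin-arm64 ssh -p cert-expiration-814000 -- \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"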

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-971000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-971000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.879225667s)

-- stdout --
	* [docker-flags-971000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-971000" primary control-plane node in "docker-flags-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:17:51.526525    4501 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:17:51.526659    4501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:51.526662    4501 out.go:358] Setting ErrFile to fd 2...
	I0919 12:17:51.526665    4501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:51.526792    4501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:17:51.527907    4501 out.go:352] Setting JSON to false
	I0919 12:17:51.544116    4501 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2836,"bootTime":1726770635,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:17:51.544185    4501 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:17:51.551282    4501 out.go:177] * [docker-flags-971000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:17:51.559112    4501 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:17:51.559146    4501 notify.go:220] Checking for updates...
	I0919 12:17:51.566057    4501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:17:51.569210    4501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:17:51.572037    4501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:17:51.575125    4501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:17:51.578112    4501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:17:51.581327    4501 config.go:182] Loaded profile config "force-systemd-flag-612000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:17:51.581394    4501 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:17:51.581453    4501 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:17:51.586082    4501 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:17:51.593064    4501 start.go:297] selected driver: qemu2
	I0919 12:17:51.593073    4501 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:17:51.593081    4501 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:17:51.595515    4501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:17:51.598121    4501 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:17:51.601251    4501 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0919 12:17:51.601270    4501 cni.go:84] Creating CNI manager for ""
	I0919 12:17:51.601294    4501 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:17:51.601298    4501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:17:51.601335    4501 start.go:340] cluster config:
	{Name:docker-flags-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:17:51.605103    4501 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:17:51.612093    4501 out.go:177] * Starting "docker-flags-971000" primary control-plane node in "docker-flags-971000" cluster
	I0919 12:17:51.615925    4501 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:17:51.615943    4501 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:17:51.615953    4501 cache.go:56] Caching tarball of preloaded images
	I0919 12:17:51.616019    4501 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:17:51.616025    4501 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:17:51.616081    4501 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/docker-flags-971000/config.json ...
	I0919 12:17:51.616093    4501 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/docker-flags-971000/config.json: {Name:mk398ea3b8e8d2ab3011d7c565a3eefb9185f106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:17:51.616330    4501 start.go:360] acquireMachinesLock for docker-flags-971000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:51.616369    4501 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "docker-flags-971000"
	I0919 12:17:51.616383    4501 start.go:93] Provisioning new machine with config: &{Name:docker-flags-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:51.616431    4501 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:51.625958    4501 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:51.644606    4501 start.go:159] libmachine.API.Create for "docker-flags-971000" (driver="qemu2")
	I0919 12:17:51.644636    4501 client.go:168] LocalClient.Create starting
	I0919 12:17:51.644699    4501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:51.644727    4501 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:51.644736    4501 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:51.644776    4501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:51.644800    4501 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:51.644809    4501 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:51.645280    4501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:51.805104    4501 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:51.914445    4501 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:51.914451    4501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:51.914632    4501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2
	I0919 12:17:51.923758    4501 main.go:141] libmachine: STDOUT: 
	I0919 12:17:51.923777    4501 main.go:141] libmachine: STDERR: 
	I0919 12:17:51.923838    4501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2 +20000M
	I0919 12:17:51.931645    4501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:51.931660    4501 main.go:141] libmachine: STDERR: 
	I0919 12:17:51.931675    4501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2
	I0919 12:17:51.931685    4501 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:51.931698    4501 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:51.931729    4501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d5:dd:70:00:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2
	I0919 12:17:51.933331    4501 main.go:141] libmachine: STDOUT: 
	I0919 12:17:51.933362    4501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:51.933383    4501 client.go:171] duration metric: took 288.747917ms to LocalClient.Create
	I0919 12:17:53.935496    4501 start.go:128] duration metric: took 2.319109792s to createHost
	I0919 12:17:53.935589    4501 start.go:83] releasing machines lock for "docker-flags-971000", held for 2.319268625s
	W0919 12:17:53.935639    4501 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:53.965729    4501 out.go:177] * Deleting "docker-flags-971000" in qemu2 ...
	W0919 12:17:53.991325    4501 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:53.991343    4501 start.go:729] Will try again in 5 seconds ...
	I0919 12:17:58.993314    4501 start.go:360] acquireMachinesLock for docker-flags-971000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:58.993536    4501 start.go:364] duration metric: took 176.667µs to acquireMachinesLock for "docker-flags-971000"
	I0919 12:17:58.993592    4501 start.go:93] Provisioning new machine with config: &{Name:docker-flags-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:58.993716    4501 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:59.008824    4501 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:59.047589    4501 start.go:159] libmachine.API.Create for "docker-flags-971000" (driver="qemu2")
	I0919 12:17:59.047637    4501 client.go:168] LocalClient.Create starting
	I0919 12:17:59.047733    4501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:59.047787    4501 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:59.047800    4501 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:59.047869    4501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:59.047922    4501 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:59.047935    4501 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:59.048764    4501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:59.224314    4501 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:59.308727    4501 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:59.308732    4501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:59.308915    4501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2
	I0919 12:17:59.317955    4501 main.go:141] libmachine: STDOUT: 
	I0919 12:17:59.317980    4501 main.go:141] libmachine: STDERR: 
	I0919 12:17:59.318038    4501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2 +20000M
	I0919 12:17:59.325859    4501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:59.325877    4501 main.go:141] libmachine: STDERR: 
	I0919 12:17:59.325889    4501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2
	I0919 12:17:59.325894    4501 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:59.325904    4501 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:59.325934    4501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:75:d1:61:89:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/docker-flags-971000/disk.qcow2
	I0919 12:17:59.327581    4501 main.go:141] libmachine: STDOUT: 
	I0919 12:17:59.327595    4501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:59.327607    4501 client.go:171] duration metric: took 279.972083ms to LocalClient.Create
	I0919 12:18:01.329721    4501 start.go:128] duration metric: took 2.336041208s to createHost
	I0919 12:18:01.329806    4501 start.go:83] releasing machines lock for "docker-flags-971000", held for 2.336316417s
	W0919 12:18:01.330208    4501 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:18:01.345939    4501 out.go:201] 
	W0919 12:18:01.348988    4501 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:18:01.349015    4501 out.go:270] * 
	* 
	W0919 12:18:01.351569    4501 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:18:01.362868    4501 out.go:201] 

** /stderr **
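The -netdev socket,id=net0,fd=3 argument in the QEMU command above is the descriptor that socket_vmnet_client hands to QEMU after connecting to /var/run/socket_vmnet; it is that connect step that fails here, before QEMU is ever exec'd. Assuming the wrapper's documented usage (client binary, socket path, then the child command), the failure should reproduce without QEMU at all:

	# true(1) stands in for qemu-system-aarch64; only the socket connect matters.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# -> Failed to connect to "/var/run/socket_vmnet": Connection refused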
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-971000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-971000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-971000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.39275ms)

-- stdout --
	* The control-plane node docker-flags-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-971000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-971000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-971000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-971000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-971000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-971000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.536833ms)

-- stdout --
	* The control-plane node docker-flags-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-971000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-971000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-971000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-971000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-19 12:18:01.501573 -0700 PDT m=+2388.023536251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-971000 -n docker-flags-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-971000 -n docker-flags-971000: exit status 7 (29.721875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-971000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-971000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-971000
--- FAIL: TestDockerFlags (10.11s)
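For reference, the two systemctl probes this test greps would, on a healthy cluster, surface the injected flags roughly as follows (expected shape only; the VM never booted in this run):

	out/minikube-darwin-arm64 -p docker-flags-971000 ssh -- \
	  "sudo systemctl show docker --property=Environment --no-pager"
	# Environment=FOO=BAR BAZ=BAT ...
	out/minikube-darwin-arm64 -p docker-flags-971000 ssh -- \
	  "sudo systemctl show docker --property=ExecStart --no-pager"
	# ExecStart=... dockerd ... --debug --icc=true ...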

TestForceSystemdFlag (10.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-612000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-612000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.932045375s)

-- stdout --
	* [force-systemd-flag-612000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-612000" primary control-plane node in "force-systemd-flag-612000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-612000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:17:46.422857    4480 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:17:46.423053    4480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:46.423130    4480 out.go:358] Setting ErrFile to fd 2...
	I0919 12:17:46.423133    4480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:46.423248    4480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:17:46.424507    4480 out.go:352] Setting JSON to false
	I0919 12:17:46.440877    4480 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2831,"bootTime":1726770635,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:17:46.440950    4480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:17:46.448499    4480 out.go:177] * [force-systemd-flag-612000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:17:46.466437    4480 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:17:46.466504    4480 notify.go:220] Checking for updates...
	I0919 12:17:46.479409    4480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:17:46.483347    4480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:17:46.486392    4480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:17:46.489419    4480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:17:46.492450    4480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:17:46.495776    4480 config.go:182] Loaded profile config "force-systemd-env-722000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:17:46.495858    4480 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:17:46.495919    4480 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:17:46.500435    4480 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:17:46.507386    4480 start.go:297] selected driver: qemu2
	I0919 12:17:46.507394    4480 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:17:46.507400    4480 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:17:46.509744    4480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:17:46.512430    4480 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:17:46.515476    4480 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 12:17:46.515500    4480 cni.go:84] Creating CNI manager for ""
	I0919 12:17:46.515528    4480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:17:46.515537    4480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:17:46.515571    4480 start.go:340] cluster config:
	{Name:force-systemd-flag-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:17:46.519489    4480 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:17:46.527443    4480 out.go:177] * Starting "force-systemd-flag-612000" primary control-plane node in "force-systemd-flag-612000" cluster
	I0919 12:17:46.531340    4480 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:17:46.531357    4480 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:17:46.531371    4480 cache.go:56] Caching tarball of preloaded images
	I0919 12:17:46.531450    4480 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:17:46.531457    4480 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:17:46.531527    4480 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/force-systemd-flag-612000/config.json ...
	I0919 12:17:46.531540    4480 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/force-systemd-flag-612000/config.json: {Name:mkeca7f3efaf65325636b1adf30e6c12cdfdd989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:17:46.531789    4480 start.go:360] acquireMachinesLock for force-systemd-flag-612000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:46.531831    4480 start.go:364] duration metric: took 31.875µs to acquireMachinesLock for "force-systemd-flag-612000"
	I0919 12:17:46.531844    4480 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:46.531872    4480 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:46.540416    4480 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:46.560378    4480 start.go:159] libmachine.API.Create for "force-systemd-flag-612000" (driver="qemu2")
	I0919 12:17:46.560416    4480 client.go:168] LocalClient.Create starting
	I0919 12:17:46.560490    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:46.560526    4480 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:46.560534    4480 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:46.560592    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:46.560623    4480 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:46.560634    4480 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:46.561098    4480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:46.723034    4480 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:46.768777    4480 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:46.768782    4480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:46.768971    4480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0919 12:17:46.778172    4480 main.go:141] libmachine: STDOUT: 
	I0919 12:17:46.778186    4480 main.go:141] libmachine: STDERR: 
	I0919 12:17:46.778241    4480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2 +20000M
	I0919 12:17:46.786099    4480 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:46.786126    4480 main.go:141] libmachine: STDERR: 
	I0919 12:17:46.786142    4480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0919 12:17:46.786147    4480 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:46.786158    4480 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:46.786187    4480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:65:7f:48:29:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0919 12:17:46.787910    4480 main.go:141] libmachine: STDOUT: 
	I0919 12:17:46.787925    4480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:46.787948    4480 client.go:171] duration metric: took 227.530041ms to LocalClient.Create
	I0919 12:17:48.790065    4480 start.go:128] duration metric: took 2.258233291s to createHost
	I0919 12:17:48.790114    4480 start.go:83] releasing machines lock for "force-systemd-flag-612000", held for 2.258334125s
	W0919 12:17:48.790174    4480 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:48.820266    4480 out.go:177] * Deleting "force-systemd-flag-612000" in qemu2 ...
	W0919 12:17:48.845151    4480 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:48.845168    4480 start.go:729] Will try again in 5 seconds ...
	I0919 12:17:53.847253    4480 start.go:360] acquireMachinesLock for force-systemd-flag-612000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:53.935699    4480 start.go:364] duration metric: took 88.309834ms to acquireMachinesLock for "force-systemd-flag-612000"
	I0919 12:17:53.935855    4480 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:53.936077    4480 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:53.952746    4480 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:54.004688    4480 start.go:159] libmachine.API.Create for "force-systemd-flag-612000" (driver="qemu2")
	I0919 12:17:54.004739    4480 client.go:168] LocalClient.Create starting
	I0919 12:17:54.004868    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:54.004939    4480 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:54.004957    4480 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:54.005015    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:54.005059    4480 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:54.005071    4480 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:54.005644    4480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:54.188508    4480 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:54.255402    4480 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:54.255407    4480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:54.255597    4480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0919 12:17:54.265051    4480 main.go:141] libmachine: STDOUT: 
	I0919 12:17:54.265070    4480 main.go:141] libmachine: STDERR: 
	I0919 12:17:54.265124    4480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2 +20000M
	I0919 12:17:54.272989    4480 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:54.273007    4480 main.go:141] libmachine: STDERR: 
	I0919 12:17:54.273019    4480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0919 12:17:54.273024    4480 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:54.273034    4480 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:54.273064    4480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0d:fc:5b:df:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0919 12:17:54.274668    4480 main.go:141] libmachine: STDOUT: 
	I0919 12:17:54.274682    4480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:54.274695    4480 client.go:171] duration metric: took 269.958875ms to LocalClient.Create
	I0919 12:17:56.276808    4480 start.go:128] duration metric: took 2.340765375s to createHost
	I0919 12:17:56.276848    4480 start.go:83] releasing machines lock for "force-systemd-flag-612000", held for 2.341188792s
	W0919 12:17:56.277230    4480 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:56.288479    4480 out.go:201] 
	W0919 12:17:56.302234    4480 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:17:56.302268    4480 out.go:270] * 
	* 
	W0919 12:17:56.304468    4480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:17:56.312879    4480 out.go:201] 

** /stderr **
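The stderr block above shows the guest disk being prepared before the VM launch that fails. Both qemu-img steps complete cleanly (empty STDERR), which isolates the failure to the networking step that follows. A minimal sketch for reproducing the disk preparation by hand, with the same flags as the log but shortened paths (the .raw source is produced by the driver and assumed to exist here):

$ qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2    # raw boot disk -> qcow2, as in the log
$ qemu-img resize disk.qcow2 +20000M                            # grow by the requested disk size
$ qemu-img info disk.qcow2                                      # sanity-check format and virtual size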
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-612000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.245875ms)

-- stdout --
	* The control-plane node force-systemd-flag-612000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-612000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
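For reference, this is the assertion the test never reached: on a cluster that actually boots with --force-systemd, the same ssh command is expected to report systemd rather than cgroupfs. The manual form of the check, with the profile name from this run:

$ out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh "docker info --format {{.CgroupDriver}}"
systemd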
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-19 12:17:56.407873 -0700 PDT m=+2382.929696626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-612000 -n force-systemd-flag-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-612000 -n force-systemd-flag-612000: exit status 7 (35.3325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-612000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-612000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-612000
--- FAIL: TestForceSystemdFlag (10.12s)
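Both start attempts above die on the same error: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon was not running on the CI host when qemu-system-aarch64 was launched through socket_vmnet_client. A quick triage sketch for that condition, using the paths from the log (the Homebrew service name is an assumption based on a standard socket_vmnet install):

$ ls -l /var/run/socket_vmnet                 # the unix socket should exist if the daemon is up
$ pgrep -fl socket_vmnet                      # is the daemon process running at all?
$ sudo brew services restart socket_vmnet     # restart it; root is required for vmnet access

Until that daemon is healthy, every qemu2/socket_vmnet test in this report fails the same way, which accounts for the long run of ~10s start failures.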

TestForceSystemdEnv (11.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-722000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0919 12:17:41.098102    1618 install.go:79] stdout: 
W0919 12:17:41.098299    1618 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit 

I0919 12:17:41.098335    1618 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit]
I0919 12:17:41.112391    1618 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit]
I0919 12:17:41.123405    1618 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit]
I0919 12:17:41.132400    1618 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit]
I0919 12:17:41.149065    1618 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 12:17:41.149182    1618 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0919 12:17:42.938240    1618 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0919 12:17:42.938261    1618 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0919 12:17:42.938305    1618 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0919 12:17:42.938344    1618 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit
I0919 12:17:43.338737    1618 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40] Decompressors:map[bz2:0x14000483a30 gz:0x14000483a38 tar:0x140004839e0 tar.bz2:0x140004839f0 tar.gz:0x14000483a00 tar.xz:0x14000483a10 tar.zst:0x14000483a20 tbz2:0x140004839f0 tgz:0x14000483a00 txz:0x14000483a10 tzst:0x14000483a20 xz:0x14000483a40 zip:0x14000483a50 zst:0x14000483a48] Getters:map[file:0x14000621f40 http:0x1400067d180 https:0x1400067d1d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 12:17:43.338854    1618 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit
I0919 12:17:46.349889    1618 install.go:79] stdout: 
W0919 12:17:46.350063    1618 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit 

I0919 12:17:46.350087    1618 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit]
I0919 12:17:46.364450    1618 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit]
I0919 12:17:46.376219    1618 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit]
I0919 12:17:46.385078    1618 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/002/docker-machine-driver-hyperkit]
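The interleaved TestHyperKitDriverInstallOrUpdate log above also captures the driver updater's download fallback: the arm64-specific release asset's checksum file returns 404 for v1.3.0 (the invalid checksum error), so it retries the unsuffixed common binary. The equivalent fetch by hand, using the URLs from the log; the final checksum comparison is an illustrative assumption, not literally what install.go runs:

$ curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit
$ curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256
$ shasum -a 256 docker-machine-driver-hyperkit    # compare this digest against the .sha256 contents
$ cat docker-machine-driver-hyperkit.sha256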
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-722000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.271039833s)

-- stdout --
	* [force-systemd-env-722000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-722000" primary control-plane node in "force-systemd-env-722000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-722000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:17:40.064747    4445 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:17:40.064941    4445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:40.064946    4445 out.go:358] Setting ErrFile to fd 2...
	I0919 12:17:40.064949    4445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:17:40.065225    4445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:17:40.066506    4445 out.go:352] Setting JSON to false
	I0919 12:17:40.082848    4445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2825,"bootTime":1726770635,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:17:40.082916    4445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:17:40.090126    4445 out.go:177] * [force-systemd-env-722000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:17:40.096921    4445 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:17:40.097002    4445 notify.go:220] Checking for updates...
	I0919 12:17:40.103921    4445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:17:40.106899    4445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:17:40.109910    4445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:17:40.112903    4445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:17:40.114454    4445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0919 12:17:40.118269    4445 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:17:40.118318    4445 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:17:40.122925    4445 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:17:40.128875    4445 start.go:297] selected driver: qemu2
	I0919 12:17:40.128883    4445 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:17:40.128894    4445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:17:40.131325    4445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:17:40.134953    4445 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:17:40.138043    4445 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 12:17:40.138062    4445 cni.go:84] Creating CNI manager for ""
	I0919 12:17:40.138093    4445 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:17:40.138097    4445 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:17:40.138119    4445 start.go:340] cluster config:
	{Name:force-systemd-env-722000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:17:40.141778    4445 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:17:40.148893    4445 out.go:177] * Starting "force-systemd-env-722000" primary control-plane node in "force-systemd-env-722000" cluster
	I0919 12:17:40.152890    4445 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:17:40.152913    4445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:17:40.152924    4445 cache.go:56] Caching tarball of preloaded images
	I0919 12:17:40.152990    4445 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:17:40.152997    4445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:17:40.153068    4445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/force-systemd-env-722000/config.json ...
	I0919 12:17:40.153080    4445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/force-systemd-env-722000/config.json: {Name:mk29b77f1d48df046507314bdfff24de4be4ed0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:17:40.153296    4445 start.go:360] acquireMachinesLock for force-systemd-env-722000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:40.153330    4445 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "force-systemd-env-722000"
	I0919 12:17:40.153340    4445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-722000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:40.153365    4445 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:40.160838    4445 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:40.178884    4445 start.go:159] libmachine.API.Create for "force-systemd-env-722000" (driver="qemu2")
	I0919 12:17:40.178910    4445 client.go:168] LocalClient.Create starting
	I0919 12:17:40.178974    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:40.179004    4445 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:40.179013    4445 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:40.179060    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:40.179083    4445 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:40.179091    4445 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:40.179437    4445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:40.338740    4445 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:40.369958    4445 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:40.369964    4445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:40.370159    4445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2
	I0919 12:17:40.379651    4445 main.go:141] libmachine: STDOUT: 
	I0919 12:17:40.379666    4445 main.go:141] libmachine: STDERR: 
	I0919 12:17:40.379728    4445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2 +20000M
	I0919 12:17:40.387842    4445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:40.387866    4445 main.go:141] libmachine: STDERR: 
	I0919 12:17:40.387882    4445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2
	I0919 12:17:40.387887    4445 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:40.387900    4445 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:40.387927    4445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:4d:6a:db:23:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2
	I0919 12:17:40.389606    4445 main.go:141] libmachine: STDOUT: 
	I0919 12:17:40.389619    4445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:40.389641    4445 client.go:171] duration metric: took 210.729ms to LocalClient.Create
	I0919 12:17:42.391679    4445 start.go:128] duration metric: took 2.238366417s to createHost
	I0919 12:17:42.391708    4445 start.go:83] releasing machines lock for "force-systemd-env-722000", held for 2.238434584s
	W0919 12:17:42.391721    4445 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:42.410112    4445 out.go:177] * Deleting "force-systemd-env-722000" in qemu2 ...
	W0919 12:17:42.423926    4445 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:42.423939    4445 start.go:729] Will try again in 5 seconds ...
	I0919 12:17:47.425974    4445 start.go:360] acquireMachinesLock for force-systemd-env-722000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:48.790295    4445 start.go:364] duration metric: took 1.364233125s to acquireMachinesLock for "force-systemd-env-722000"
	I0919 12:17:48.790396    4445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-722000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:48.790645    4445 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:48.803309    4445 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 12:17:48.853744    4445 start.go:159] libmachine.API.Create for "force-systemd-env-722000" (driver="qemu2")
	I0919 12:17:48.853810    4445 client.go:168] LocalClient.Create starting
	I0919 12:17:48.853956    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:48.854027    4445 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:48.854044    4445 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:48.854119    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:48.854166    4445 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:48.854181    4445 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:48.854784    4445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:49.051143    4445 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:49.225920    4445 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:49.225926    4445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:49.226130    4445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2
	I0919 12:17:49.235863    4445 main.go:141] libmachine: STDOUT: 
	I0919 12:17:49.235880    4445 main.go:141] libmachine: STDERR: 
	I0919 12:17:49.235941    4445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2 +20000M
	I0919 12:17:49.243873    4445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:49.243901    4445 main.go:141] libmachine: STDERR: 
	I0919 12:17:49.243914    4445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2
	I0919 12:17:49.243918    4445 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:49.243924    4445 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:49.243956    4445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:5a:bc:ca:90:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/force-systemd-env-722000/disk.qcow2
	I0919 12:17:49.245692    4445 main.go:141] libmachine: STDOUT: 
	I0919 12:17:49.245709    4445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:49.245722    4445 client.go:171] duration metric: took 391.909ms to LocalClient.Create
	I0919 12:17:51.247137    4445 start.go:128] duration metric: took 2.456488333s to createHost
	I0919 12:17:51.247209    4445 start.go:83] releasing machines lock for "force-systemd-env-722000", held for 2.456938667s
	W0919 12:17:51.247587    4445 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-722000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-722000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:51.268376    4445 out.go:201] 
	W0919 12:17:51.278249    4445 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:17:51.278309    4445 out.go:270] * 
	* 
	W0919 12:17:51.280787    4445 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:17:51.291106    4445 out.go:201] 

** /stderr **
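TestForceSystemdEnv exercises the same systemd switch as TestForceSystemdFlag, but through the environment: note MINIKUBE_FORCE_SYSTEMD=true in the stdout above rather than a --force-systemd argument. A hand-run equivalent, with the binary and profile name from this run:

$ MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-722000 --memory=2048 --driver=qemu2

It fails identically, since the socket_vmnet connection is refused long before the cgroup driver configuration could matter.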
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-722000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-722000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-722000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.382625ms)

-- stdout --
	* The control-plane node force-systemd-env-722000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-722000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-722000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-19 12:17:51.387245 -0700 PDT m=+2377.908932168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-722000 -n force-systemd-env-722000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-722000 -n force-systemd-env-722000: exit status 7 (34.334542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-722000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-722000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-722000
--- FAIL: TestForceSystemdEnv (11.46s)
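One timing detail worth noting: this run's second acquireMachinesLock took 1.364s (versus 27.5µs on its first attempt) because the concurrent TestForceSystemdFlag process held the machines lock until 12:17:48.790114, immediately before this process acquired it at 12:17:48.790295; parallel profiles serialize on that lock. After the cleanup step, listing what the harness left behind is a simple cross-check:

$ out/minikube-darwin-arm64 profile list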

TestFunctional/parallel/ServiceCmdConnect (36.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-569000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-569000 expose deployment hello-node-connect --type=NodePort --port=8080
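The deployment and NodePort service created above do come up (the pod is reported healthy within ~7s below), so a manual reproduction of the probe that fails mirrors the test flow; the kubectl wait step here is an assumption standing in for the test's pod-matching loop:

$ kubectl --context functional-569000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
$ kubectl --context functional-569000 expose deployment hello-node-connect --type=NodePort --port=8080
$ kubectl --context functional-569000 wait --for=condition=ready pod -l app=hello-node-connect --timeout=10m
$ curl "$(out/minikube-darwin-arm64 -p functional-569000 service hello-node-connect --url)"

In this run the URL resolves (http://192.168.105.4:31760) but every fetch is refused, pointing at NodePort reachability from the host rather than at the pod itself.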
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-4sg7r" [643b1064-d44c-41f2-b682-2e352c58e2d0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-4sg7r" [643b1064-d44c-41f2-b682-2e352c58e2d0] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.011597208s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31760
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:10.717810    1618 retry.go:31] will retry after 663.817685ms: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:11.385575    1618 retry.go:31] will retry after 877.87685ms: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:12.267247    1618 retry.go:31] will retry after 1.370145451s: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:13.641061    1618 retry.go:31] will retry after 4.940628322s: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
E0919 11:57:16.602637    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:18.584828    1618 retry.go:31] will retry after 3.101900733s: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:21.690565    1618 retry.go:31] will retry after 6.179084721s: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
I0919 11:57:27.873256    1618 retry.go:31] will retry after 10.907614835s: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31760: Get "http://192.168.105.4:31760": dial tcp 192.168.105.4:31760: connect: connection refused
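The retry cadence visible above (663ms, 877ms, 1.37s, 4.94s, ...) is the signature of an exponential backoff with jitter, as emitted by minikube's retry helper. A minimal Go sketch of that pattern, assuming nothing about retry.go's actual internals (URL is the one from this run, used purely as an example):

	package main

	import (
		"fmt"
		"math/rand"
		"net/http"
		"time"
	)

	// fetchWithBackoff retries an HTTP GET, doubling the delay each attempt
	// and adding jitter — a sketch of the pattern in the log lines above,
	// not minikube's actual implementation.
	func fetchWithBackoff(url string, attempts int) error {
		delay := 500 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			var resp *http.Response
			if resp, err = http.Get(url); err == nil {
				resp.Body.Close()
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2 // exponential growth, matching the widening intervals logged above
		}
		return err
	}

	func main() {
		if err := fetchWithBackoff("http://192.168.105.4:31760", 7); err != nil {
			fmt.Println("giving up:", err)
		}
	}

No amount of retrying helps here: the service has no ready endpoints because its only pod is crash-looping.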
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-569000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-4sg7r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-569000/192.168.105.4
Start Time:       Thu, 19 Sep 2024 11:57:03 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://438a4b85ef55133713efcfdd418be2eb1a21b0f2ec2711c4e379be19a42dd6a8
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 19 Sep 2024 11:57:16 -0700
      Finished:     Thu, 19 Sep 2024 11:57:16 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mll7x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-mll7x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-4sg7r to functional-569000
  Normal   Pulled     22s (x3 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    22s (x3 over 34s)  kubelet            Created container echoserver-arm
  Normal   Started    22s (x3 over 34s)  kubelet            Started container echoserver-arm
  Warning  BackOff    8s (x3 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-4sg7r_default(643b1064-d44c-41f2-b682-2e352c58e2d0)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-569000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
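That "exec format error" is the root cause of the CrashLoopBackOff: the entrypoint binary inside the image was built for a different CPU architecture than the arm64 node, so the kernel refuses to execute it. A minimal Go sketch of how one could confirm such a mismatch by reading the ELF header of a binary extracted from the image (the local path is illustrative, e.g. copied out with `docker cp`; this is a diagnostic aid, not part of the test suite):

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
	)

	func main() {
		// Open the suspect binary extracted from the failing image.
		f, err := elf.Open("./nginx")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		// On an arm64 node, anything other than EM_AARCH64 here produces
		// "exec format error" when the kernel tries to run the binary.
		fmt.Println("target machine:", f.Machine)
	}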
functional_test.go:1614: (dbg) Run:  kubectl --context functional-569000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.37.63
IPs:                      10.103.37.63
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31760/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-569000 -n functional-569000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2215059764/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh -- ls                                                                                         | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh cat                                                                                           | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | /mount-9p/test-1726772245621732000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh stat                                                                                          | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh stat                                                                                          | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh sudo                                                                                          | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port178230314/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh -- ls                                                                                         | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh sudo                                                                                          | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-569000 ssh findmnt                                                                                       | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT | 19 Sep 24 11:57 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-569000 --dry-run                                                                                      | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-569000                                                                                                | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-569000 | jenkins | v1.34.0 | 19 Sep 24 11:57 PDT |                     |
	|           | -p functional-569000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 11:57:32
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 11:57:32.388602    2891 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:57:32.388709    2891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:57:32.388711    2891 out.go:358] Setting ErrFile to fd 2...
	I0919 11:57:32.388714    2891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:57:32.388845    2891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 11:57:32.390215    2891 out.go:352] Setting JSON to false
	I0919 11:57:32.407903    2891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1617,"bootTime":1726770635,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:57:32.407999    2891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:57:32.412828    2891 out.go:177] * [functional-569000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:57:32.420619    2891 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 11:57:32.420660    2891 notify.go:220] Checking for updates...
	I0919 11:57:32.427794    2891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:57:32.429085    2891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:57:32.431751    2891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:57:32.434831    2891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 11:57:32.437814    2891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 11:57:32.441155    2891 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 11:57:32.441421    2891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:57:32.445799    2891 out.go:177] * Using the qemu2 driver based on the existing profile
	I0919 11:57:32.452773    2891 start.go:297] selected driver: qemu2
	I0919 11:57:32.452780    2891 start.go:901] validating driver "qemu2" against &{Name:functional-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:57:32.452839    2891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 11:57:32.459804    2891 out.go:201] 
	W0919 11:57:32.463770    2891 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0919 11:57:32.467755    2891 out.go:201] 
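The dry-run above exits by design: the requested 250 MiB is below minikube's usable minimum of 1800 MB, and the validation amounts to a simple bounds check. A sketch of that check, with the threshold taken from the log line rather than from minikube's source:

	package main

	import "fmt"

	// minUsableMB is the minimum reported in the RSRC_INSUFFICIENT_REQ_MEMORY
	// message above; the constant name and helper are illustrative.
	const minUsableMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %d MB is below the usable minimum of %d MB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250)) // fails, as in the dry-run above
	}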
	
	
	==> Docker <==
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.431923725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.431954945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.431964573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.432174193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.433188072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.434084073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.434091826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:33 functional-569000 dockerd[5655]: time="2024-09-19T18:57:33.434116835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:33 functional-569000 cri-dockerd[5994]: time="2024-09-19T18:57:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ab667af5b8ae831bc87e779c6fbd3361811afbdfc7440f23d907f96c91f1681/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 19 18:57:33 functional-569000 cri-dockerd[5994]: time="2024-09-19T18:57:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b74d1d2febdb846084d054e2051e8ad3eb013932af3ed78f932eedb2651ca24e/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 19 18:57:33 functional-569000 dockerd[5648]: time="2024-09-19T18:57:33.718406735Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 19 18:57:35 functional-569000 cri-dockerd[5994]: time="2024-09-19T18:57:35Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 19 18:57:35 functional-569000 dockerd[5655]: time="2024-09-19T18:57:35.463156711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 18:57:35 functional-569000 dockerd[5655]: time="2024-09-19T18:57:35.463226153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 18:57:35 functional-569000 dockerd[5655]: time="2024-09-19T18:57:35.463250913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:35 functional-569000 dockerd[5655]: time="2024-09-19T18:57:35.463294053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:35 functional-569000 dockerd[5648]: time="2024-09-19T18:57:35.588842588Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.420999394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.421028822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.421037158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.421066336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.485663345Z" level=info msg="shim disconnected" id=b25ad394da49775c49dd38dc4a4b6ecc3010745fff52985cf13d2e183c93e80c namespace=moby
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.485694648Z" level=warning msg="cleaning up after shim disconnected" id=b25ad394da49775c49dd38dc4a4b6ecc3010745fff52985cf13d2e183c93e80c namespace=moby
	Sep 19 18:57:39 functional-569000 dockerd[5655]: time="2024-09-19T18:57:39.485699316Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 18:57:39 functional-569000 dockerd[5648]: time="2024-09-19T18:57:39.487949025Z" level=info msg="ignoring event" container=b25ad394da49775c49dd38dc4a4b6ecc3010745fff52985cf13d2e183c93e80c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED                  STATE               NAME                        ATTEMPT             POD ID              POD
	b25ad394da497       72565bf5bbedf                                                                                          Less than a second ago   Exited              echoserver-arm              3                   1c6d57b18ea0e       hello-node-64b4f8f9ff-m6fbq
	084c9be3c2118       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 seconds ago            Running             dashboard-metrics-scraper   0                   3ab667af5b8ae       dashboard-metrics-scraper-c5db448b4-v2cb7
	1923c7a80af3a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    11 seconds ago           Exited              mount-munger                0                   065c2b35631de       busybox-mount
	df029203b5cd3       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          20 seconds ago           Running             myfrontend                  0                   afa5b3c664932       sp-pod
	438a4b85ef551       72565bf5bbedf                                                                                          23 seconds ago           Exited              echoserver-arm              2                   69b06ff43f54c       hello-node-connect-65d86f57f4-4sg7r
	f9245de616c76       72565bf5bbedf                                                                                          28 seconds ago           Exited              echoserver-arm              2                   1c6d57b18ea0e       hello-node-64b4f8f9ff-m6fbq
	4116ebdda7707       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          43 seconds ago           Running             nginx                       0                   24937576e40ae       nginx-svc
	dcabda1f6aa38       2f6c962e7b831                                                                                          About a minute ago       Running             coredns                     2                   ad3190657ef93       coredns-7c65d6cfc9-f2lzm
	7361042a804e9       24a140c548c07                                                                                          About a minute ago       Running             kube-proxy                  2                   606ca80f2edc7       kube-proxy-bcwg5
	7b7760aa717c9       ba04bb24b9575                                                                                          About a minute ago       Running             storage-provisioner         2                   1d65e92308bbb       storage-provisioner
	0890dc4aba529       279f381cb3736                                                                                          About a minute ago       Running             kube-controller-manager     2                   c61ce66c0e0dd       kube-controller-manager-functional-569000
	14930ae4082c8       27e3830e14027                                                                                          About a minute ago       Running             etcd                        2                   07797feb3653b       etcd-functional-569000
	eb80d8040660b       7f8aa378bb47d                                                                                          About a minute ago       Running             kube-scheduler              2                   c18a092379b8e       kube-scheduler-functional-569000
	a4df2569d11f4       d3f53a98c0a9d                                                                                          About a minute ago       Running             kube-apiserver              0                   76489bcdea0fe       kube-apiserver-functional-569000
	e3acb149d4b23       2f6c962e7b831                                                                                          2 minutes ago            Exited              coredns                     1                   eb0a44db4024a       coredns-7c65d6cfc9-f2lzm
	c76d77c848a82       24a140c548c07                                                                                          2 minutes ago            Exited              kube-proxy                  1                   5f012765430da       kube-proxy-bcwg5
	a08f7463ca500       ba04bb24b9575                                                                                          2 minutes ago            Exited              storage-provisioner         1                   0d06e9d528f99       storage-provisioner
	d782ab8fb3923       7f8aa378bb47d                                                                                          2 minutes ago            Exited              kube-scheduler              1                   14bc6dcf2fef6       kube-scheduler-functional-569000
	b79bd157b9cf1       279f381cb3736                                                                                          2 minutes ago            Exited              kube-controller-manager     1                   86f2b285206c6       kube-controller-manager-functional-569000
	a7cdb0b8da4c3       27e3830e14027                                                                                          2 minutes ago            Exited              etcd                        1                   439561e5e44ce       etcd-functional-569000
	
	
	==> coredns [dcabda1f6aa3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35170 - 1442 "HINFO IN 3339593968748455611.8240923581543843509. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010698229s
	[INFO] 10.244.0.1:23488 - 27611 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000114379s
	[INFO] 10.244.0.1:43081 - 28723 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000083284s
	[INFO] 10.244.0.1:19681 - 61406 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000905198s
	[INFO] 10.244.0.1:27001 - 42772 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000115254s
	[INFO] 10.244.0.1:5761 - 40007 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000059607s
	[INFO] 10.244.0.1:37378 - 15858 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000181156s
	
	
	==> coredns [e3acb149d4b2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49552 - 58405 "HINFO IN 1648990091066377136.2208318223780286455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004647042s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-569000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-569000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=functional-569000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T11_55_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:55:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-569000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:57:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:57:22 +0000   Thu, 19 Sep 2024 18:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:57:22 +0000   Thu, 19 Sep 2024 18:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:57:22 +0000   Thu, 19 Sep 2024 18:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:57:22 +0000   Thu, 19 Sep 2024 18:55:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-569000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 32d8b78a1d324fcdb83e96cd9082182a
	  System UUID:                32d8b78a1d324fcdb83e96cd9082182a
	  Boot ID:                    14c1598a-70d2-42a5-9c56-1232791d14be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-m6fbq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  default                     hello-node-connect-65d86f57f4-4sg7r          0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-7c65d6cfc9-f2lzm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m31s
	  kube-system                 etcd-functional-569000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m37s
	  kube-system                 kube-apiserver-functional-569000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-functional-569000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-bcwg5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-functional-569000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-v2cb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-7lddb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m31s                kube-proxy       
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 2m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m37s                kubelet          Node functional-569000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m37s                kubelet          Node functional-569000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s                kubelet          Node functional-569000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m37s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m33s                kubelet          Node functional-569000 status is now: NodeReady
	  Normal  RegisteredNode           2m32s                node-controller  Node functional-569000 event: Registered Node functional-569000 in Controller
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node functional-569000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node functional-569000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node functional-569000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m1s                 node-controller  Node functional-569000 event: Registered Node functional-569000 in Controller
	  Normal  Starting                 80s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)    kubelet          Node functional-569000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)    kubelet          Node functional-569000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)    kubelet          Node functional-569000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           75s                  node-controller  Node functional-569000 event: Registered Node functional-569000 in Controller
	
	
	==> dmesg <==
	[  +0.057712] kauditd_printk_skb: 35 callbacks suppressed
	[Sep19 18:56] systemd-fstab-generator[5178]: Ignoring "noauto" option for root device
	[  +0.055510] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.117937] systemd-fstab-generator[5212]: Ignoring "noauto" option for root device
	[  +0.111710] systemd-fstab-generator[5224]: Ignoring "noauto" option for root device
	[  +0.107846] systemd-fstab-generator[5238]: Ignoring "noauto" option for root device
	[  +5.104735] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.432438] systemd-fstab-generator[5873]: Ignoring "noauto" option for root device
	[  +0.093307] systemd-fstab-generator[5885]: Ignoring "noauto" option for root device
	[  +0.085370] systemd-fstab-generator[5897]: Ignoring "noauto" option for root device
	[  +0.108203] systemd-fstab-generator[5986]: Ignoring "noauto" option for root device
	[  +0.209509] systemd-fstab-generator[6153]: Ignoring "noauto" option for root device
	[  +1.038833] systemd-fstab-generator[6277]: Ignoring "noauto" option for root device
	[  +3.414593] kauditd_printk_skb: 199 callbacks suppressed
	[ +15.612811] systemd-fstab-generator[7299]: Ignoring "noauto" option for root device
	[  +0.051508] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.291865] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.006560] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.188831] kauditd_printk_skb: 9 callbacks suppressed
	[Sep19 18:57] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.273067] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.166310] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.621432] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.351759] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.314268] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [14930ae4082c] <==
	{"level":"info","ts":"2024-09-19T18:56:20.876676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-19T18:56:20.876820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-19T18:56:20.876882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-19T18:56:20.876914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-19T18:56:20.877316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-19T18:56:20.877346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-19T18:56:20.877368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-19T18:56:20.879575Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-569000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T18:56:20.879673Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:56:20.880235Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:56:20.880437Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:56:20.880785Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:56:20.882699Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:56:20.882762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:56:20.884858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-19T18:56:20.885300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:56:53.247658Z","caller":"traceutil/trace.go:171","msg":"trace[1131959547] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"111.355528ms","start":"2024-09-19T18:56:53.136294Z","end":"2024-09-19T18:56:53.247650Z","steps":["trace[1131959547] 'process raft request'  (duration: 111.232186ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:57:39.271833Z","caller":"traceutil/trace.go:171","msg":"trace[363191452] linearizableReadLoop","detail":"{readStateIndex:943; appliedIndex:942; }","duration":"191.851684ms","start":"2024-09-19T18:57:39.079971Z","end":"2024-09-19T18:57:39.271823Z","steps":["trace[363191452] 'read index received'  (duration: 191.750771ms)","trace[363191452] 'applied index is now lower than readState.Index'  (duration: 100.704µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:57:39.271877Z","caller":"traceutil/trace.go:171","msg":"trace[1404047780] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"192.554819ms","start":"2024-09-19T18:57:39.079319Z","end":"2024-09-19T18:57:39.271873Z","steps":["trace[1404047780] 'process raft request'  (duration: 192.430189ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:57:39.272041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.061178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/hello-node-connect\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-09-19T18:57:39.272054Z","caller":"traceutil/trace.go:171","msg":"trace[80781534] range","detail":"{range_begin:/registry/services/endpoints/default/hello-node-connect; range_end:; response_count:1; response_revision:869; }","duration":"192.082227ms","start":"2024-09-19T18:57:39.079970Z","end":"2024-09-19T18:57:39.272052Z","steps":["trace[80781534] 'agreement among raft nodes before linearized reading'  (duration: 192.015119ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:57:39.272100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.416675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-19T18:57:39.272111Z","caller":"traceutil/trace.go:171","msg":"trace[1891427485] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:869; }","duration":"149.429346ms","start":"2024-09-19T18:57:39.122680Z","end":"2024-09-19T18:57:39.272109Z","steps":["trace[1891427485] 'agreement among raft nodes before linearized reading'  (duration: 149.411006ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:57:39.272160Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.343058ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:57:39.272170Z","caller":"traceutil/trace.go:171","msg":"trace[1899746907] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:869; }","duration":"105.354063ms","start":"2024-09-19T18:57:39.166814Z","end":"2024-09-19T18:57:39.272168Z","steps":["trace[1899746907] 'agreement among raft nodes before linearized reading'  (duration: 105.339307ms)"],"step_count":1}
	
	
	==> etcd [a7cdb0b8da4c] <==
	{"level":"info","ts":"2024-09-19T18:55:34.687784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T18:55:34.687866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-19T18:55:34.687902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T18:55:34.687964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-19T18:55:34.688019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-19T18:55:34.688072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-19T18:55:34.690896Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-569000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T18:55:34.691107Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:55:34.691463Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:55:34.691526Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:55:34.691570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:55:34.693586Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:55:34.693586Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:55:34.695760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-19T18:55:34.696152Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:56:04.956513Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T18:56:04.956550Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-569000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-19T18:56:04.956590Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T18:56:04.956604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T18:56:04.956640Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T18:56:04.956673Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T18:56:04.965683Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-19T18:56:04.967325Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-19T18:56:04.967361Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-19T18:56:04.967370Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-569000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:57:39 up 2 min,  0 users,  load average: 0.52, 0.41, 0.18
	Linux functional-569000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4df2569d11f] <==
	I0919 18:56:21.486378       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 18:56:21.486496       1 aggregator.go:171] initial CRD sync complete...
	I0919 18:56:21.486528       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 18:56:21.486556       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 18:56:21.486575       1 cache.go:39] Caches are synced for autoregister controller
	I0919 18:56:21.486843       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 18:56:21.486918       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0919 18:56:21.489081       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 18:56:21.534611       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 18:56:22.385560       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 18:56:22.662408       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 18:56:22.666747       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 18:56:22.681322       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 18:56:22.691632       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 18:56:22.693557       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 18:56:24.916618       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 18:56:25.067941       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 18:56:42.947223       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.46.162"}
	I0919 18:56:47.922385       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0919 18:56:47.965143       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.239.130"}
	I0919 18:56:53.266021       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.215.22"}
	I0919 18:57:03.696110       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.37.63"}
	I0919 18:57:33.040403       1 controller.go:615] quota admission added evaluator for: namespaces
	I0919 18:57:33.117772       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.147.38"}
	I0919 18:57:33.126574       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.97.35"}
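
Each "allocated clusterIPs" line above corresponds to a Service the test created (invalid-svc, hello-node, nginx-svc, hello-node-connect, and the dashboard pair). They can be cross-checked with plain kubectl; nothing below is specific to this run except the context name:

	kubectl --context functional-569000 get svc -A -o wide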
	
	
	==> kube-controller-manager [0890dc4aba52] <==
	I0919 18:57:05.679773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="47.102µs"
	I0919 18:57:11.790434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="62.441µs"
	I0919 18:57:16.099690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="41.266µs"
	I0919 18:57:16.876259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.925µs"
	I0919 18:57:22.518268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-569000"
	I0919 18:57:25.066653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="29.553µs"
	I0919 18:57:30.087246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="108.332µs"
	I0919 18:57:33.074204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.326429ms"
	E0919 18:57:33.074227       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 18:57:33.075519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.913704ms"
	E0919 18:57:33.075545       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 18:57:33.079155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.801711ms"
	E0919 18:57:33.079174       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 18:57:33.080260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.082776ms"
	E0919 18:57:33.080276       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 18:57:33.090907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.311099ms"
	I0919 18:57:33.094738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.933454ms"
	I0919 18:57:33.101255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.485462ms"
	I0919 18:57:33.101291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.548µs"
	I0919 18:57:33.105923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="186.653µs"
	I0919 18:57:33.108288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.359146ms"
	I0919 18:57:33.108312       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.629µs"
	I0919 18:57:33.110368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.838µs"
	I0919 18:57:36.216201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.615863ms"
	I0919 18:57:36.216724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="31.47µs"
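
The "Unhandled Error" entries above are a startup race, not a persistent failure: the dashboard ReplicaSets sync before the kubernetes-dashboard ServiceAccount exists, so pod creation is briefly forbidden, and the later successful syncs show the race resolving once the account is created. A quick check, with names taken from the log:

	kubectl --context functional-569000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard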
	
	
	==> kube-controller-manager [b79bd157b9cf] <==
	I0919 18:55:38.573208       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0919 18:55:38.593318       1 shared_informer.go:320] Caches are synced for job
	I0919 18:55:38.593364       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0919 18:55:38.593346       1 shared_informer.go:320] Caches are synced for PVC protection
	I0919 18:55:38.594517       1 shared_informer.go:320] Caches are synced for persistent volume
	I0919 18:55:38.594587       1 shared_informer.go:320] Caches are synced for deployment
	I0919 18:55:38.594824       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0919 18:55:38.594852       1 shared_informer.go:320] Caches are synced for GC
	I0919 18:55:38.594871       1 shared_informer.go:320] Caches are synced for attach detach
	I0919 18:55:38.594932       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0919 18:55:38.594977       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0919 18:55:38.595002       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0919 18:55:38.595494       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0919 18:55:38.644787       1 shared_informer.go:320] Caches are synced for stateful set
	I0919 18:55:38.693968       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0919 18:55:38.693975       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 18:55:38.763997       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 18:55:38.796569       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="245.580015ms"
	I0919 18:55:38.796792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.939µs"
	I0919 18:55:38.801609       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 18:55:39.215432       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 18:55:39.270557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 18:55:39.270625       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 18:55:40.114467       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.560385ms"
	I0919 18:55:40.115273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="204.501µs"
	
	
	==> kube-proxy [7361042a804e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 18:56:22.655406       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 18:56:22.659588       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0919 18:56:22.659674       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:56:22.678454       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 18:56:22.678476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 18:56:22.678490       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:56:22.679705       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:56:22.679846       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:56:22.679942       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:56:22.680717       1 config.go:199] "Starting service config controller"
	I0919 18:56:22.682029       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:56:22.681049       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:56:22.682046       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:56:22.682751       1 config.go:328] "Starting node config controller"
	I0919 18:56:22.682795       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:56:22.782686       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:56:22.782686       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:56:22.782857       1 shared_informer.go:320] Caches are synced for node config
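
The truncated nftables errors at the top of this block are expected on this guest: kube-proxy first tries to clean up any leftover nftables rules, the Buildroot kernel rejects the operation with "Operation not supported", and kube-proxy falls back to the iptables proxier ("Using iptables Proxier" above). One way to confirm the kernel-side limitation, assuming the nft binary ships in the VM image:

	# Run inside the minikube VM; an unsupported kernel fails here as well.
	minikube -p functional-569000 ssh -- sudo nft list tables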
	
	
	==> kube-proxy [c76d77c848a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 18:55:36.364125       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 18:55:36.381762       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0919 18:55:36.381793       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:55:36.389443       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 18:55:36.389457       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 18:55:36.389468       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:55:36.390045       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:55:36.390162       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:55:36.390170       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:55:36.390767       1 config.go:199] "Starting service config controller"
	I0919 18:55:36.390778       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:55:36.390787       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:55:36.390790       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:55:36.390906       1 config.go:328] "Starting node config controller"
	I0919 18:55:36.390913       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:55:36.492056       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:55:36.492056       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:55:36.492128       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d782ab8fb392] <==
	I0919 18:55:33.938154       1 serving.go:386] Generated self-signed cert in-memory
	W0919 18:55:35.238848       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 18:55:35.238908       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 18:55:35.238927       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 18:55:35.238960       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 18:55:35.260047       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 18:55:35.260176       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:55:35.261294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 18:55:35.262288       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 18:55:35.262556       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 18:55:35.262599       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 18:55:35.362807       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 18:56:04.948966       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eb80d8040660] <==
	I0919 18:56:20.377948       1 serving.go:386] Generated self-signed cert in-memory
	W0919 18:56:21.410532       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 18:56:21.410625       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 18:56:21.410664       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 18:56:21.410681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 18:56:21.440751       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 18:56:21.440846       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:56:21.442443       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 18:56:21.442758       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 18:56:21.444404       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 18:56:21.443236       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 18:56:21.547045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:57:19 functional-569000 kubelet[6284]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:57:19 functional-569000 kubelet[6284]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:57:19 functional-569000 kubelet[6284]: I0919 18:57:19.068369    6284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2437eb6-6f44-4d20-bbae-c1479a6b2aa8" path="/var/lib/kubelet/pods/c2437eb6-6f44-4d20-bbae-c1479a6b2aa8/volumes"
	Sep 19 18:57:19 functional-569000 kubelet[6284]: I0919 18:57:19.124952    6284 scope.go:117] "RemoveContainer" containerID="3a88daa32685e61b2fea79f06656c57cf70aa1442e327ecc8cb401675ee7fe9c"
	Sep 19 18:57:25 functional-569000 kubelet[6284]: I0919 18:57:25.058482    6284 scope.go:117] "RemoveContainer" containerID="f9245de616c766837191a8b0660a70860196d7d4e3acf9bed0e094c854d6cce1"
	Sep 19 18:57:25 functional-569000 kubelet[6284]: E0919 18:57:25.058829    6284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-m6fbq_default(55ef2b4f-9eb4-493a-b51d-3822a3ca586c)\"" pod="default/hello-node-64b4f8f9ff-m6fbq" podUID="55ef2b4f-9eb4-493a-b51d-3822a3ca586c"
	Sep 19 18:57:25 functional-569000 kubelet[6284]: I0919 18:57:25.066566    6284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=7.329571037 podStartE2EDuration="8.06654671s" podCreationTimestamp="2024-09-19 18:57:17 +0000 UTC" firstStartedPulling="2024-09-19 18:57:18.405735291 +0000 UTC m=+59.419299387" lastFinishedPulling="2024-09-19 18:57:19.142711006 +0000 UTC m=+60.156275060" observedRunningTime="2024-09-19 18:57:19.976236619 +0000 UTC m=+60.989800756" watchObservedRunningTime="2024-09-19 18:57:25.06654671 +0000 UTC m=+66.080110806"
	Sep 19 18:57:26 functional-569000 kubelet[6284]: I0919 18:57:26.577599    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4mv\" (UniqueName: \"kubernetes.io/projected/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-kube-api-access-gr4mv\") pod \"busybox-mount\" (UID: \"0dd69c6b-7567-4c72-bb5e-a1ea854310ad\") " pod="default/busybox-mount"
	Sep 19 18:57:26 functional-569000 kubelet[6284]: I0919 18:57:26.577647    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-test-volume\") pod \"busybox-mount\" (UID: \"0dd69c6b-7567-4c72-bb5e-a1ea854310ad\") " pod="default/busybox-mount"
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.066568    6284 scope.go:117] "RemoveContainer" containerID="438a4b85ef55133713efcfdd418be2eb1a21b0f2ec2711c4e379be19a42dd6a8"
	Sep 19 18:57:30 functional-569000 kubelet[6284]: E0919 18:57:30.067756    6284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-4sg7r_default(643b1064-d44c-41f2-b682-2e352c58e2d0)\"" pod="default/hello-node-connect-65d86f57f4-4sg7r" podUID="643b1064-d44c-41f2-b682-2e352c58e2d0"
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.416297    6284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr4mv\" (UniqueName: \"kubernetes.io/projected/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-kube-api-access-gr4mv\") pod \"0dd69c6b-7567-4c72-bb5e-a1ea854310ad\" (UID: \"0dd69c6b-7567-4c72-bb5e-a1ea854310ad\") "
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.416326    6284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-test-volume\") pod \"0dd69c6b-7567-4c72-bb5e-a1ea854310ad\" (UID: \"0dd69c6b-7567-4c72-bb5e-a1ea854310ad\") "
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.416384    6284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-test-volume" (OuterVolumeSpecName: "test-volume") pod "0dd69c6b-7567-4c72-bb5e-a1ea854310ad" (UID: "0dd69c6b-7567-4c72-bb5e-a1ea854310ad"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.420164    6284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-kube-api-access-gr4mv" (OuterVolumeSpecName: "kube-api-access-gr4mv") pod "0dd69c6b-7567-4c72-bb5e-a1ea854310ad" (UID: "0dd69c6b-7567-4c72-bb5e-a1ea854310ad"). InnerVolumeSpecName "kube-api-access-gr4mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.517329    6284 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gr4mv\" (UniqueName: \"kubernetes.io/projected/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-kube-api-access-gr4mv\") on node \"functional-569000\" DevicePath \"\""
	Sep 19 18:57:30 functional-569000 kubelet[6284]: I0919 18:57:30.517376    6284 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0dd69c6b-7567-4c72-bb5e-a1ea854310ad-test-volume\") on node \"functional-569000\" DevicePath \"\""
	Sep 19 18:57:31 functional-569000 kubelet[6284]: I0919 18:57:31.138552    6284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065c2b35631ded50c002e2b02e4f0d68e3572cf52f773ceea2d356cf7fd0439b"
	Sep 19 18:57:33 functional-569000 kubelet[6284]: E0919 18:57:33.092335    6284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dd69c6b-7567-4c72-bb5e-a1ea854310ad" containerName="mount-munger"
	Sep 19 18:57:33 functional-569000 kubelet[6284]: I0919 18:57:33.092368    6284 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd69c6b-7567-4c72-bb5e-a1ea854310ad" containerName="mount-munger"
	Sep 19 18:57:33 functional-569000 kubelet[6284]: I0919 18:57:33.237387    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk826\" (UniqueName: \"kubernetes.io/projected/007aab6f-db1b-44ae-b757-def8b9381576-kube-api-access-nk826\") pod \"dashboard-metrics-scraper-c5db448b4-v2cb7\" (UID: \"007aab6f-db1b-44ae-b757-def8b9381576\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-v2cb7"
	Sep 19 18:57:33 functional-569000 kubelet[6284]: I0919 18:57:33.237410    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6g7\" (UniqueName: \"kubernetes.io/projected/eac867bb-2cb1-4164-b5e2-15fb6859e3ab-kube-api-access-4r6g7\") pod \"kubernetes-dashboard-695b96c756-7lddb\" (UID: \"eac867bb-2cb1-4164-b5e2-15fb6859e3ab\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-7lddb"
	Sep 19 18:57:33 functional-569000 kubelet[6284]: I0919 18:57:33.237423    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/007aab6f-db1b-44ae-b757-def8b9381576-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-v2cb7\" (UID: \"007aab6f-db1b-44ae-b757-def8b9381576\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-v2cb7"
	Sep 19 18:57:33 functional-569000 kubelet[6284]: I0919 18:57:33.237432    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eac867bb-2cb1-4164-b5e2-15fb6859e3ab-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-7lddb\" (UID: \"eac867bb-2cb1-4164-b5e2-15fb6859e3ab\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-7lddb"
	Sep 19 18:57:39 functional-569000 kubelet[6284]: I0919 18:57:39.059446    6284 scope.go:117] "RemoveContainer" containerID="f9245de616c766837191a8b0660a70860196d7d4e3acf9bed0e094c854d6cce1"
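
The CrashLoopBackOff entries above mean the echoserver-arm containers keep exiting shortly after starting; the logs of the previous attempt usually say why. A sketch, with the pod name taken from the log:

	kubectl --context functional-569000 logs --previous hello-node-connect-65d86f57f4-4sg7r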
	
	
	==> storage-provisioner [7b7760aa717c] <==
	I0919 18:56:22.558099       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:56:22.570916       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:56:22.570935       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:56:39.979060       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:56:39.979551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-569000_cc052b1a-9683-4b0e-90c7-ae30959cb8c5!
	I0919 18:56:39.980213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e1d32d0-51ef-4b92-be80-7c39573802ca", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-569000_cc052b1a-9683-4b0e-90c7-ae30959cb8c5 became leader
	I0919 18:56:40.080643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-569000_cc052b1a-9683-4b0e-90c7-ae30959cb8c5!
	I0919 18:57:04.652561       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0919 18:57:04.652600       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    c93a0a47-917a-415d-ade4-5e8061b85e03 309 0 2024-09-19 18:55:07 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-19 18:55:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-cc490d97-af44-4c34-a855-c362518a1aac &PersistentVolumeClaim{ObjectMeta:{myclaim  default  cc490d97-af44-4c34-a855-c362518a1aac 717 0 2024-09-19 18:57:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-19 18:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-19 18:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0919 18:57:04.652851       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-cc490d97-af44-4c34-a855-c362518a1aac" provisioned
	I0919 18:57:04.652858       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0919 18:57:04.652866       1 volume_store.go:212] Trying to save persistentvolume "pvc-cc490d97-af44-4c34-a855-c362518a1aac"
	I0919 18:57:04.653277       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cc490d97-af44-4c34-a855-c362518a1aac", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0919 18:57:04.656513       1 volume_store.go:219] persistentvolume "pvc-cc490d97-af44-4c34-a855-c362518a1aac" saved
	I0919 18:57:04.656582       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cc490d97-af44-4c34-a855-c362518a1aac", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cc490d97-af44-4c34-a855-c362518a1aac
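
The provisioner created a hostPath volume under /tmp/hostpath-provisioner and bound it to default/myclaim. Both objects can be inspected directly, with names taken from the events above:

	kubectl --context functional-569000 get pvc myclaim
	kubectl --context functional-569000 get pv pvc-cc490d97-af44-4c34-a855-c362518a1aac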
	
	
	==> storage-provisioner [a08f7463ca50] <==
	I0919 18:55:36.296447       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:55:36.315892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:55:36.315918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:55:36.327872       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:55:36.328083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-569000_78f20f49-0050-4e52-9552-98099de45e10!
	I0919 18:55:36.328491       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e1d32d0-51ef-4b92-be80-7c39573802ca", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-569000_78f20f49-0050-4e52-9552-98099de45e10 became leader
	I0919 18:55:36.428620       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-569000_78f20f49-0050-4e52-9552-98099de45e10!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-569000 -n functional-569000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-569000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-695b96c756-7lddb
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-569000 describe pod busybox-mount kubernetes-dashboard-695b96c756-7lddb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-569000 describe pod busybox-mount kubernetes-dashboard-695b96c756-7lddb: exit status 1 (40.394458ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-569000/192.168.105.4
	Start Time:       Thu, 19 Sep 2024 11:57:26 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://1923c7a80af3a65716072c66b83fe1dd413bc5b4529630604ea564e4fca58564
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 19 Sep 2024 11:57:28 -0700
	      Finished:     Thu, 19 Sep 2024 11:57:28 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr4mv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gr4mv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-569000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.537s (1.537s including waiting). Image size: 3547125 bytes.
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    12s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-7lddb" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-569000 describe pod busybox-mount kubernetes-dashboard-695b96c756-7lddb: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.66s)
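
Worth noting when reading this post-mortem: --field-selector=status.phase!=Running matches every non-Running phase, including Succeeded, so the completed busybox-mount pod (Status: Succeeded above) is listed alongside the genuinely absent dashboard pod. Standard field selectors separate the two cases:

	kubectl --context functional-569000 get po -A --field-selector=status.phase=Pending
	kubectl --context functional-569000 get po -A --field-selector=status.phase=Failed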

TestMultiControlPlane/serial/StopSecondaryNode (64.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 node stop m02 -v=7 --alsologtostderr
E0919 12:01:50.409434    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:01:52.972895    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:01:56.082059    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:01:58.096297    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 node stop m02 -v=7 --alsologtostderr: (12.185345834s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
E0919 12:02:08.339539    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:02:23.807435    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: (25.967517125s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
E0919 12:02:28.821888    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (25.977927542s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0919 12:02:53.553120    3583 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0919 12:02:53.553136    3583 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.13s)
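
The repeated "dial tcp 192.168.105.5:22: connect: operation timed out" errors mean the primary node's VM never answered on SSH, so each status probe burned its full timeout (~26s per attempt above). A quick host-side reachability check, using the IP from the stderr:

	# -z probes the port without sending data; -w 5 sets a five-second timeout (BSD nc on macOS).
	nc -z -w 5 192.168.105.5 22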

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.94s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0919 12:03:09.783566    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.977633916s)
ha_test.go:413: expected profile "ha-056000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (25.958716166s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0919 12:03:45.488309    3607 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0919 12:03:45.488326    3607 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.94s)

TestMultiControlPlane/serial/RestartSecondaryNode (83.03s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.080666083s)

-- stdout --
	* Starting "ha-056000-m02" control-plane node in "ha-056000" cluster
	* Restarting existing qemu2 VM for "ha-056000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-056000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:03:45.521262    3615 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:03:45.521537    3615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:03:45.521540    3615 out.go:358] Setting ErrFile to fd 2...
	I0919 12:03:45.521543    3615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:03:45.521662    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:03:45.521918    3615 mustload.go:65] Loading cluster: ha-056000
	I0919 12:03:45.522150    3615 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0919 12:03:45.522395    3615 host.go:58] "ha-056000-m02" host status: Stopped
	I0919 12:03:45.526023    3615 out.go:177] * Starting "ha-056000-m02" control-plane node in "ha-056000" cluster
	I0919 12:03:45.530003    3615 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:03:45.530017    3615 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:03:45.530024    3615 cache.go:56] Caching tarball of preloaded images
	I0919 12:03:45.530097    3615 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:03:45.530103    3615 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:03:45.530157    3615 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/ha-056000/config.json ...
	I0919 12:03:45.530556    3615 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:03:45.530614    3615 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "ha-056000-m02"
	I0919 12:03:45.530622    3615 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:03:45.530627    3615 fix.go:54] fixHost starting: m02
	I0919 12:03:45.530735    3615 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
	W0919 12:03:45.530740    3615 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:03:45.534903    3615 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
	I0919 12:03:45.538001    3615 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:03:45.538047    3615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:02:db:0a:90:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/disk.qcow2
	I0919 12:03:45.540775    3615 main.go:141] libmachine: STDOUT: 
	I0919 12:03:45.540791    3615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:03:45.540822    3615 fix.go:56] duration metric: took 10.194ms for fixHost
	I0919 12:03:45.540829    3615 start.go:83] releasing machines lock for "ha-056000-m02", held for 10.207792ms
	W0919 12:03:45.540835    3615 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:03:45.540868    3615 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:03:45.540872    3615 start.go:729] Will try again in 5 seconds ...
	I0919 12:03:50.541633    3615 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:03:50.541757    3615 start.go:364] duration metric: took 104.042µs to acquireMachinesLock for "ha-056000-m02"
	I0919 12:03:50.541790    3615 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:03:50.541794    3615 fix.go:54] fixHost starting: m02
	I0919 12:03:50.541961    3615 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
	W0919 12:03:50.541970    3615 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:03:50.546353    3615 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
	I0919 12:03:50.549330    3615 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:03:50.549375    3615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:02:db:0a:90:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/disk.qcow2
	I0919 12:03:50.551616    3615 main.go:141] libmachine: STDOUT: 
	I0919 12:03:50.551633    3615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:03:50.551651    3615 fix.go:56] duration metric: took 9.857625ms for fixHost
	I0919 12:03:50.551655    3615 start.go:83] releasing machines lock for "ha-056000-m02", held for 9.892792ms
	W0919 12:03:50.551694    3615 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:03:50.555340    3615 out.go:201] 
	W0919 12:03:50.559454    3615 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:03:50.559458    3615 out.go:270] * 
	* 
	W0919 12:03:50.561246    3615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:03:50.565398    3615 out.go:201] 

** /stderr **
ha_test.go:422: I0919 12:03:45.521262    3615 out.go:345] Setting OutFile to fd 1 ...
I0919 12:03:45.521537    3615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 12:03:45.521540    3615 out.go:358] Setting ErrFile to fd 2...
I0919 12:03:45.521543    3615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 12:03:45.521662    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 12:03:45.521918    3615 mustload.go:65] Loading cluster: ha-056000
I0919 12:03:45.522150    3615 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0919 12:03:45.522395    3615 host.go:58] "ha-056000-m02" host status: Stopped
I0919 12:03:45.526023    3615 out.go:177] * Starting "ha-056000-m02" control-plane node in "ha-056000" cluster
I0919 12:03:45.530003    3615 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 12:03:45.530017    3615 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0919 12:03:45.530024    3615 cache.go:56] Caching tarball of preloaded images
I0919 12:03:45.530097    3615 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0919 12:03:45.530103    3615 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0919 12:03:45.530157    3615 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/ha-056000/config.json ...
I0919 12:03:45.530556    3615 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0919 12:03:45.530614    3615 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "ha-056000-m02"
I0919 12:03:45.530622    3615 start.go:96] Skipping create...Using existing machine configuration
I0919 12:03:45.530627    3615 fix.go:54] fixHost starting: m02
I0919 12:03:45.530735    3615 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
W0919 12:03:45.530740    3615 fix.go:138] unexpected machine state, will restart: <nil>
I0919 12:03:45.534903    3615 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
I0919 12:03:45.538001    3615 qemu.go:418] Using hvf for hardware acceleration
I0919 12:03:45.538047    3615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:02:db:0a:90:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/disk.qcow2
I0919 12:03:45.540775    3615 main.go:141] libmachine: STDOUT: 
I0919 12:03:45.540791    3615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0919 12:03:45.540822    3615 fix.go:56] duration metric: took 10.194ms for fixHost
I0919 12:03:45.540829    3615 start.go:83] releasing machines lock for "ha-056000-m02", held for 10.207792ms
W0919 12:03:45.540835    3615 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0919 12:03:45.540868    3615 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0919 12:03:45.540872    3615 start.go:729] Will try again in 5 seconds ...
I0919 12:03:50.541633    3615 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0919 12:03:50.541757    3615 start.go:364] duration metric: took 104.042µs to acquireMachinesLock for "ha-056000-m02"
I0919 12:03:50.541790    3615 start.go:96] Skipping create...Using existing machine configuration
I0919 12:03:50.541794    3615 fix.go:54] fixHost starting: m02
I0919 12:03:50.541961    3615 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
W0919 12:03:50.541970    3615 fix.go:138] unexpected machine state, will restart: <nil>
I0919 12:03:50.546353    3615 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
I0919 12:03:50.549330    3615 qemu.go:418] Using hvf for hardware acceleration
I0919 12:03:50.549375    3615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:02:db:0a:90:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000-m02/disk.qcow2
I0919 12:03:50.551616    3615 main.go:141] libmachine: STDOUT: 
I0919 12:03:50.551633    3615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0919 12:03:50.551651    3615 fix.go:56] duration metric: took 9.857625ms for fixHost
I0919 12:03:50.551655    3615 start.go:83] releasing machines lock for "ha-056000-m02", held for 9.892792ms
W0919 12:03:50.551694    3615 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0919 12:03:50.555340    3615 out.go:201] 
W0919 12:03:50.559454    3615 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0919 12:03:50.559458    3615 out.go:270] * 
* 
W0919 12:03:50.561246    3615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 12:03:50.565398    3615 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-056000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: (25.956342667s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0919 12:04:31.704850    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (26.033139958s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: connect: operation timed out

** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (25.962353625s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0919 12:05:08.519442    3631 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0919 12:05:08.519455    3631 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (83.03s)
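
Every restart attempt in this test dies at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so qemu never receives a network file descriptor. A short Go sketch of a pre-flight check against the daemon socket; the path is taken from the log, and the check itself is illustrative rather than part of minikube:

// vmnetcheck.go: verify something is listening on the socket_vmnet unix socket.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the failure mode above: "Connection refused" means the
		// socket_vmnet daemon is not running, or the socket path is stale.
		fmt.Println("socket_vmnet not available:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}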

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-056000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-056000 -v=7 --alsologtostderr
E0919 12:06:47.820360    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:06:56.075516    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:07:15.543550    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-056000 -v=7 --alsologtostderr: (3m49.011718959s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.224669791s)

-- stdout --
	* [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:09:00.856623    3693 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:09:00.856814    3693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:09:00.856819    3693 out.go:358] Setting ErrFile to fd 2...
	I0919 12:09:00.856822    3693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:09:00.856994    3693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:09:00.858261    3693 out.go:352] Setting JSON to false
	I0919 12:09:00.879042    3693 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2305,"bootTime":1726770635,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:09:00.879113    3693 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:09:00.883851    3693 out.go:177] * [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:09:00.890776    3693 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:09:00.890830    3693 notify.go:220] Checking for updates...
	I0919 12:09:00.897840    3693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:09:00.900827    3693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:09:00.903787    3693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:09:00.906908    3693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:09:00.909797    3693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:09:00.913189    3693 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:09:00.913246    3693 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:09:00.917828    3693 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:09:00.924769    3693 start.go:297] selected driver: qemu2
	I0919 12:09:00.924777    3693 start.go:901] validating driver "qemu2" against &{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:09:00.924854    3693 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:09:00.927505    3693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:09:00.927531    3693 cni.go:84] Creating CNI manager for ""
	I0919 12:09:00.927554    3693 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 12:09:00.927604    3693 start.go:340] cluster config:
	{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:09:00.931795    3693 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:09:00.940755    3693 out.go:177] * Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	I0919 12:09:00.944827    3693 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:09:00.944843    3693 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:09:00.944854    3693 cache.go:56] Caching tarball of preloaded images
	I0919 12:09:00.944920    3693 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:09:00.944926    3693 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:09:00.944993    3693 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/ha-056000/config.json ...
	I0919 12:09:00.945447    3693 start.go:360] acquireMachinesLock for ha-056000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:09:00.945483    3693 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "ha-056000"
	I0919 12:09:00.945492    3693 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:09:00.945499    3693 fix.go:54] fixHost starting: 
	I0919 12:09:00.945621    3693 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W0919 12:09:00.945630    3693 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:09:00.948868    3693 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I0919 12:09:00.956743    3693 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:09:00.956774    3693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:69:61:ff:69:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/disk.qcow2
	I0919 12:09:00.958928    3693 main.go:141] libmachine: STDOUT: 
	I0919 12:09:00.958946    3693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:09:00.958976    3693 fix.go:56] duration metric: took 13.477583ms for fixHost
	I0919 12:09:00.958981    3693 start.go:83] releasing machines lock for "ha-056000", held for 13.49425ms
	W0919 12:09:00.958986    3693 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:09:00.959016    3693 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:09:00.959023    3693 start.go:729] Will try again in 5 seconds ...
	I0919 12:09:05.960777    3693 start.go:360] acquireMachinesLock for ha-056000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:09:05.961210    3693 start.go:364] duration metric: took 304.417µs to acquireMachinesLock for "ha-056000"
	I0919 12:09:05.961395    3693 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:09:05.961415    3693 fix.go:54] fixHost starting: 
	I0919 12:09:05.962137    3693 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W0919 12:09:05.962162    3693 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:09:05.966555    3693 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I0919 12:09:05.973520    3693 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:09:05.973742    3693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:69:61:ff:69:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/disk.qcow2
	I0919 12:09:05.982720    3693 main.go:141] libmachine: STDOUT: 
	I0919 12:09:05.982810    3693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:09:05.982879    3693 fix.go:56] duration metric: took 21.469042ms for fixHost
	I0919 12:09:05.982894    3693 start.go:83] releasing machines lock for "ha-056000", held for 21.622833ms
	W0919 12:09:05.983044    3693 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:09:05.990529    3693 out.go:201] 
	W0919 12:09:05.994589    3693 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:09:05.994614    3693 out.go:270] * 
	* 
	W0919 12:09:05.997233    3693 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:09:06.003581    3693 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-056000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-056000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (33.026917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)
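
The start path visible in this log makes exactly two attempts with a fixed five-second pause ("StartHost failed, but will try again ... Will try again in 5 seconds") before exiting with GUEST_PROVISION. A sketch of that two-attempt, fixed-delay pattern; startHost here is a hypothetical stand-in for the driver start call, not minikube's implementation:

// retry.go: two-attempt, fixed-delay retry mirroring the pattern in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	for i := 1; i <= attempts; i++ {
		err := startHost()
		if err == nil {
			return
		}
		if i < attempts {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Println("* Failed to start qemu2 VM:", err)
	}
}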

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.367458ms)

-- stdout --
	* The control-plane node ha-056000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-056000"

-- /stdout --
** stderr ** 
	I0919 12:09:06.144699    3705 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:09:06.144927    3705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:09:06.144931    3705 out.go:358] Setting ErrFile to fd 2...
	I0919 12:09:06.144933    3705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:09:06.145061    3705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:09:06.145304    3705 mustload.go:65] Loading cluster: ha-056000
	I0919 12:09:06.145548    3705 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0919 12:09:06.145885    3705 out.go:270] ! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	W0919 12:09:06.145997    3705 out.go:270] ! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	I0919 12:09:06.150656    3705 out.go:177] * The control-plane node ha-056000-m03 host is not running: state=Stopped
	I0919 12:09:06.153605    3705 out.go:177]   To start a cluster, run: "minikube start -p ha-056000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: exit status 7 (30.279083ms)

-- stdout --
	ha-056000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:09:06.185609    3707 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:09:06.185779    3707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:09:06.185782    3707 out.go:358] Setting ErrFile to fd 2...
	I0919 12:09:06.185785    3707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:09:06.185912    3707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:09:06.186045    3707 out.go:352] Setting JSON to false
	I0919 12:09:06.186057    3707 mustload.go:65] Loading cluster: ha-056000
	I0919 12:09:06.186104    3707 notify.go:220] Checking for updates...
	I0919 12:09:06.186288    3707 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:09:06.186299    3707 status.go:174] checking status of ha-056000 ...
	I0919 12:09:06.186540    3707 status.go:364] ha-056000 host status = "Stopped" (err=<nil>)
	I0919 12:09:06.186543    3707 status.go:377] host is not running, skipping remaining checks
	I0919 12:09:06.186545    3707 status.go:176] ha-056000 status: &{Name:ha-056000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 12:09:06.186555    3707 status.go:174] checking status of ha-056000-m02 ...
	I0919 12:09:06.186646    3707 status.go:364] ha-056000-m02 host status = "Stopped" (err=<nil>)
	I0919 12:09:06.186649    3707 status.go:377] host is not running, skipping remaining checks
	I0919 12:09:06.186650    3707 status.go:176] ha-056000-m02 status: &{Name:ha-056000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 12:09:06.186654    3707 status.go:174] checking status of ha-056000-m03 ...
	I0919 12:09:06.186749    3707 status.go:364] ha-056000-m03 host status = "Stopped" (err=<nil>)
	I0919 12:09:06.186752    3707 status.go:377] host is not running, skipping remaining checks
	I0919 12:09:06.186754    3707 status.go:176] ha-056000-m03 status: &{Name:ha-056000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 12:09:06.186758    3707 status.go:174] checking status of ha-056000-m04 ...
	I0919 12:09:06.186855    3707 status.go:364] ha-056000-m04 host status = "Stopped" (err=<nil>)
	I0919 12:09:06.186857    3707 status.go:377] host is not running, skipping remaining checks
	I0919 12:09:06.186859    3707 status.go:176] ha-056000-m04 status: &{Name:ha-056000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (30.517583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
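
The post-mortem helper renders host state with a Go template (status --format={{.Host}}), which is why its stdout is the single word "Stopped". A sketch of how such a template is applied to a status record; the field names mirror the status dump above, but the struct is illustrative and not minikube's own type:

// statusfmt.go: apply a {{.Host}}-style template to a status record (sketch).
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{Name: "ha-056000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the post-mortem output
}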

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-056000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"k
ubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\"
:\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (30.493875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
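Note: the post-mortem's `status --format={{.Host}}` is a Go text/template rendered against each node's status value, which is why a stopped host prints only "Stopped". A minimal sketch under that assumption, using a simplified stand-in for the struct that status.go logs in the stderr traces below:

package main

import (
	"os"
	"text/template"
)

// nodeStatus is a simplified stand-in for the status struct seen in the
// stderr logs (Name/Host/Kubelet/APIServer/Kubeconfig, ...).
type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := nodeStatus{Name: "ha-056000", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Equivalent of: minikube status --format={{.Host}}
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := t.Execute(os.Stdout, st); err != nil { // prints "Stopped"
		panic(err)
	}
}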

TestMultiControlPlane/serial/StopCluster (202.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 stop -v=7 --alsologtostderr
E0919 12:11:47.811300    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:11:56.066259    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 stop -v=7 --alsologtostderr: (3m21.979475666s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: exit status 7 (71.208708ms)

-- stdout --
	ha-056000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:12:28.339267    3752 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:12:28.339531    3752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:12:28.339537    3752 out.go:358] Setting ErrFile to fd 2...
	I0919 12:12:28.339540    3752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:12:28.339698    3752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:12:28.339890    3752 out.go:352] Setting JSON to false
	I0919 12:12:28.339903    3752 mustload.go:65] Loading cluster: ha-056000
	I0919 12:12:28.339954    3752 notify.go:220] Checking for updates...
	I0919 12:12:28.340251    3752 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:12:28.340262    3752 status.go:174] checking status of ha-056000 ...
	I0919 12:12:28.340558    3752 status.go:364] ha-056000 host status = "Stopped" (err=<nil>)
	I0919 12:12:28.340563    3752 status.go:377] host is not running, skipping remaining checks
	I0919 12:12:28.340566    3752 status.go:176] ha-056000 status: &{Name:ha-056000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 12:12:28.340579    3752 status.go:174] checking status of ha-056000-m02 ...
	I0919 12:12:28.340709    3752 status.go:364] ha-056000-m02 host status = "Stopped" (err=<nil>)
	I0919 12:12:28.340713    3752 status.go:377] host is not running, skipping remaining checks
	I0919 12:12:28.340715    3752 status.go:176] ha-056000-m02 status: &{Name:ha-056000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 12:12:28.340720    3752 status.go:174] checking status of ha-056000-m03 ...
	I0919 12:12:28.340854    3752 status.go:364] ha-056000-m03 host status = "Stopped" (err=<nil>)
	I0919 12:12:28.340859    3752 status.go:377] host is not running, skipping remaining checks
	I0919 12:12:28.340861    3752 status.go:176] ha-056000-m03 status: &{Name:ha-056000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 12:12:28.340865    3752 status.go:174] checking status of ha-056000-m04 ...
	I0919 12:12:28.340987    3752 status.go:364] ha-056000-m04 host status = "Stopped" (err=<nil>)
	I0919 12:12:28.340990    3752 status.go:377] host is not running, skipping remaining checks
	I0919 12:12:28.340993    3752 status.go:176] ha-056000-m04 status: &{Name:ha-056000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": ha-056000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": ha-056000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": ha-056000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-056000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (32.421375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)
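Note: the three ha_test.go assertions above are count checks over the plain-text status output rather than structured comparisons. A hypothetical sketch of that kind of check (the marker strings and expected counts here are illustrative assumptions, not the suite's actual constants):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the stdout captured above (all nodes stopped).
	out := "ha-056000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n" +
		"ha-056000-m02\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
	controlPlanes := strings.Count(out, "type: Control Plane")
	stoppedKubelets := strings.Count(out, "kubelet: Stopped")
	fmt.Printf("control planes: %d, stopped kubelets: %d\n",
		controlPlanes, stoppedKubelets)
	// An assertion would then compare these counts against the expected
	// post-stop cluster shape and fail with messages like the ones above.
}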

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.179978s)

-- stdout --
	* [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:12:28.403088    3756 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:12:28.403228    3756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:12:28.403232    3756 out.go:358] Setting ErrFile to fd 2...
	I0919 12:12:28.403234    3756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:12:28.403373    3756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:12:28.404370    3756 out.go:352] Setting JSON to false
	I0919 12:12:28.420449    3756 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2513,"bootTime":1726770635,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:12:28.420524    3756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:12:28.425163    3756 out.go:177] * [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:12:28.430938    3756 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:12:28.431011    3756 notify.go:220] Checking for updates...
	I0919 12:12:28.437921    3756 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:12:28.440928    3756 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:12:28.443948    3756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:12:28.446835    3756 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:12:28.449915    3756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:12:28.453214    3756 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:12:28.453493    3756 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:12:28.456800    3756 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:12:28.463897    3756 start.go:297] selected driver: qemu2
	I0919 12:12:28.463904    3756 start.go:901] validating driver "qemu2" against &{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:12:28.463969    3756 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:12:28.466200    3756 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:12:28.466223    3756 cni.go:84] Creating CNI manager for ""
	I0919 12:12:28.466243    3756 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 12:12:28.466286    3756 start.go:340] cluster config:
	{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:12:28.469796    3756 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:12:28.477866    3756 out.go:177] * Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	I0919 12:12:28.481874    3756 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:12:28.481888    3756 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:12:28.481898    3756 cache.go:56] Caching tarball of preloaded images
	I0919 12:12:28.481952    3756 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:12:28.481958    3756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:12:28.482029    3756 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/ha-056000/config.json ...
	I0919 12:12:28.482464    3756 start.go:360] acquireMachinesLock for ha-056000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:12:28.482500    3756 start.go:364] duration metric: took 29.041µs to acquireMachinesLock for "ha-056000"
	I0919 12:12:28.482508    3756 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:12:28.482517    3756 fix.go:54] fixHost starting: 
	I0919 12:12:28.482638    3756 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W0919 12:12:28.482646    3756 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:12:28.486943    3756 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I0919 12:12:28.494843    3756 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:12:28.494880    3756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:69:61:ff:69:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/disk.qcow2
	I0919 12:12:28.496871    3756 main.go:141] libmachine: STDOUT: 
	I0919 12:12:28.496889    3756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:12:28.496919    3756 fix.go:56] duration metric: took 14.404167ms for fixHost
	I0919 12:12:28.496923    3756 start.go:83] releasing machines lock for "ha-056000", held for 14.41925ms
	W0919 12:12:28.496928    3756 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:12:28.496976    3756 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:12:28.496981    3756 start.go:729] Will try again in 5 seconds ...
	I0919 12:12:33.499001    3756 start.go:360] acquireMachinesLock for ha-056000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:12:33.499395    3756 start.go:364] duration metric: took 302.708µs to acquireMachinesLock for "ha-056000"
	I0919 12:12:33.499516    3756 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:12:33.499534    3756 fix.go:54] fixHost starting: 
	I0919 12:12:33.500214    3756 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W0919 12:12:33.500247    3756 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:12:33.508605    3756 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I0919 12:12:33.512597    3756 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:12:33.512875    3756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:69:61:ff:69:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/ha-056000/disk.qcow2
	I0919 12:12:33.522219    3756 main.go:141] libmachine: STDOUT: 
	I0919 12:12:33.522285    3756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:12:33.522346    3756 fix.go:56] duration metric: took 22.814291ms for fixHost
	I0919 12:12:33.522363    3756 start.go:83] releasing machines lock for "ha-056000", held for 22.948375ms
	W0919 12:12:33.522535    3756 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:12:33.529454    3756 out.go:201] 
	W0919 12:12:33.533692    3756 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:12:33.533733    3756 out.go:270] * 
	* 
	W0919 12:12:33.536281    3756 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:12:33.547061    3756 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (69.418542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
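Note: both restart attempts above die on the same "Connection refused" against /var/run/socket_vmnet, the SocketVMnetPath recorded in the cluster config, before qemu ever boots the VM. A minimal diagnostic sketch (not minikube code) that dials the socket to confirm whether the socket_vmnet daemon is accepting connections:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the profile config above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening, this fails the same way the qemu2
		// driver does: connection refused on the unix socket.
		fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}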

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-056000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"k
ubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\"
:\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (30.081958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-056000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-056000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.358292ms)

-- stdout --
	* The control-plane node ha-056000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-056000"

-- /stdout --
** stderr ** 
	I0919 12:12:33.737317    3771 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:12:33.737461    3771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:12:33.737464    3771 out.go:358] Setting ErrFile to fd 2...
	I0919 12:12:33.737466    3771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:12:33.737603    3771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:12:33.737815    3771 mustload.go:65] Loading cluster: ha-056000
	I0919 12:12:33.738081    3771 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0919 12:12:33.738402    3771 out.go:270] ! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	W0919 12:12:33.738509    3771 out.go:270] ! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	I0919 12:12:33.742682    3771 out.go:177] * The control-plane node ha-056000-m03 host is not running: state=Stopped
	I0919 12:12:33.746671    3771 out.go:177]   To start a cluster, run: "minikube start -p ha-056000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-056000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (30.522541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.07s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-687000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-687000 --driver=qemu2 : exit status 80 (9.996324667s)

-- stdout --
	* [image-687000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-687000" primary control-plane node in "image-687000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-687000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-687000 -n image-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-687000 -n image-687000: exit status 7 (69.467416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-687000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.07s)

TestJSONOutput/start/Command (10.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-885000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-885000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.037502417s)

-- stdout --
	{"specversion":"1.0","id":"fb241259-89c6-4bf6-a618-a9e6b3c78fb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-885000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f1858c9-d382-410b-9bf1-0d3659133cb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"3abc0315-7550-4bcc-928d-3658200fc671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig"}}
	{"specversion":"1.0","id":"4c761816-70bf-4de8-9dea-443488140b26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0d9568fd-c6fb-4e97-9abb-da5a0f0a5521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7b4a1e1-c074-4211-ab0b-8d169368711a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube"}}
	{"specversion":"1.0","id":"fa649aac-f45c-47ee-b129-78dc35559c26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af88317c-5c2d-4786-abb7-5a6d9b36f3c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"538cd74c-0d52-4075-8a9b-f75812b34c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f208741b-ccab-4b2f-a49c-306a608f21c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-885000\" primary control-plane node in \"json-output-885000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"197add1e-4a2c-4b98-b809-0bcfba659b6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"20e49f11-3994-407a-8d61-44aca2599d55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-885000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"46ff89ca-ea9d-4eee-898b-1e01671eabb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"56c9b6df-6cf9-4070-a9f6-40006567487d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8e7a0ca2-6cf1-46cf-8665-c41f6588bab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-885000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"88e37b73-1738-4077-b279-f9c0186a99d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"05b152bb-7f82-46c4-8da2-ec4a6ef11bc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-885000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.04s)
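Note: this failure is a parse error rather than a timeout: with --output=json every stdout line must be a CloudEvents JSON object, so the stray plain-text "OUTPUT:"/"ERROR:" lines from the driver abort decoding. A minimal sketch of that per-line decoding (assumed behavior, not the suite's exact code):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Two lines of the captured stdout: one valid event, one stray line.
	input := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\"}\nOUTPUT: "
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Reproduces: invalid character 'O' looking for beginning of value
			fmt.Fprintln(os.Stderr, "converting to cloud events:", err)
			os.Exit(1)
		}
		fmt.Println("event type:", ev["type"])
	}
}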

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-885000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-885000 --output=json --user=testUser: exit status 83 (76.592167ms)

-- stdout --
	{"specversion":"1.0","id":"317999e4-c82d-4f17-b612-5f7f5ec98b92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-885000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"aabc9b41-6b52-432b-9523-3db624de8682","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-885000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-885000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-885000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-885000 --output=json --user=testUser: exit status 83 (44.249458ms)

-- stdout --
	* The control-plane node json-output-885000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-885000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-885000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-885000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-042000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-042000 --driver=qemu2 : exit status 80 (9.826208792s)

-- stdout --
	* [first-042000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-042000" primary control-plane node in "first-042000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-042000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-042000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-19 12:13:06.731558 -0700 PDT m=+2093.245462793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-044000 -n second-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-044000 -n second-044000: exit status 85 (80.856709ms)

-- stdout --
	* Profile "second-044000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-044000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-044000" host is not running, skipping log retrieval (state="* Profile \"second-044000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-044000\"")
helpers_test.go:175: Cleaning up "second-044000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-044000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-19 12:13:06.921332 -0700 PDT m=+2093.435241293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-042000 -n first-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-042000 -n first-042000: exit status 7 (30.281375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-042000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-042000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-042000
--- FAIL: TestMinikubeProfile (10.13s)
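Every attempt above dies at the same step: nothing is listening on /var/run/socket_vmnet, so the qemu2 VM is refused its network socket before it can boot. One quick way to confirm that root cause from the test host is to dial the socket directly; this is a minimal sketch using only the Go standard library, with the socket path taken verbatim from the log:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Socket path copied from the failing log lines above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// "connection refused" here matches the minikube error and
			// means nothing is accepting connections on the socket.
			fmt.Fprintf(os.Stderr, "no listener on %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Until that dial succeeds, every qemu2-backed test in this run can be expected to fail the same way.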

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-449000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-449000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.056529583s)

                                                
                                                
-- stdout --
	* [mount-start-1-449000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-449000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-449000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-449000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-449000 -n mount-start-1-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-449000 -n mount-start-1-449000: exit status 7 (70.820709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.13s)
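Each post-mortem probes the host with status --format={{.Host}}; that format string is a Go text/template rendered against minikube's status struct, so "Stopped" plus exit status 7 is the expected shape once the VM never came up. A minimal sketch of that rendering step follows; the Status struct here is a trimmed stand-in, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a trimmed stand-in for the struct minikube renders.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		// The same template string passed on the command line above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Render the state seen in every post-mortem: a stopped host.
		if err := tmpl.Execute(os.Stdout, Status{Name: "mount-start-1-449000", Host: "Stopped"}); err != nil {
			panic(err)
		}
	}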

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-327000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0919 12:13:19.153011    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-327000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.892579458s)

                                                
                                                
-- stdout --
	* [multinode-327000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-327000" primary control-plane node in "multinode-327000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-327000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:13:17.373987    3913 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:13:17.374122    3913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:13:17.374125    3913 out.go:358] Setting ErrFile to fd 2...
	I0919 12:13:17.374127    3913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:13:17.374254    3913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:13:17.375344    3913 out.go:352] Setting JSON to false
	I0919 12:13:17.391474    3913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2562,"bootTime":1726770635,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:13:17.391537    3913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:13:17.398407    3913 out.go:177] * [multinode-327000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:13:17.407375    3913 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:13:17.407414    3913 notify.go:220] Checking for updates...
	I0919 12:13:17.415311    3913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:13:17.418339    3913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:13:17.421247    3913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:13:17.424349    3913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:13:17.427314    3913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:13:17.430547    3913 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:13:17.435279    3913 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:13:17.442337    3913 start.go:297] selected driver: qemu2
	I0919 12:13:17.442345    3913 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:13:17.442354    3913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:13:17.444742    3913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:13:17.448432    3913 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:13:17.451352    3913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:13:17.451367    3913 cni.go:84] Creating CNI manager for ""
	I0919 12:13:17.451385    3913 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 12:13:17.451389    3913 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 12:13:17.451425    3913 start.go:340] cluster config:
	{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:13:17.455247    3913 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:13:17.462315    3913 out.go:177] * Starting "multinode-327000" primary control-plane node in "multinode-327000" cluster
	I0919 12:13:17.466323    3913 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:13:17.466337    3913 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:13:17.466346    3913 cache.go:56] Caching tarball of preloaded images
	I0919 12:13:17.466407    3913 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:13:17.466413    3913 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:13:17.466631    3913 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/multinode-327000/config.json ...
	I0919 12:13:17.466643    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/multinode-327000/config.json: {Name:mk64dba6a761c48a5e1641d1ffdfad1e8f4e9427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:13:17.466870    3913 start.go:360] acquireMachinesLock for multinode-327000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:13:17.466906    3913 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "multinode-327000"
	I0919 12:13:17.466917    3913 start.go:93] Provisioning new machine with config: &{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:13:17.466950    3913 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:13:17.476355    3913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:13:17.495329    3913 start.go:159] libmachine.API.Create for "multinode-327000" (driver="qemu2")
	I0919 12:13:17.495359    3913 client.go:168] LocalClient.Create starting
	I0919 12:13:17.495412    3913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:13:17.495443    3913 main.go:141] libmachine: Decoding PEM data...
	I0919 12:13:17.495452    3913 main.go:141] libmachine: Parsing certificate...
	I0919 12:13:17.495494    3913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:13:17.495527    3913 main.go:141] libmachine: Decoding PEM data...
	I0919 12:13:17.495540    3913 main.go:141] libmachine: Parsing certificate...
	I0919 12:13:17.495990    3913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:13:17.655650    3913 main.go:141] libmachine: Creating SSH key...
	I0919 12:13:17.779641    3913 main.go:141] libmachine: Creating Disk image...
	I0919 12:13:17.779647    3913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:13:17.779822    3913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:13:17.789061    3913 main.go:141] libmachine: STDOUT: 
	I0919 12:13:17.789081    3913 main.go:141] libmachine: STDERR: 
	I0919 12:13:17.789142    3913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2 +20000M
	I0919 12:13:17.796958    3913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:13:17.796971    3913 main.go:141] libmachine: STDERR: 
	I0919 12:13:17.796984    3913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:13:17.796992    3913 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:13:17.797002    3913 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:13:17.797028    3913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:1a:b5:46:f5:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:13:17.798620    3913 main.go:141] libmachine: STDOUT: 
	I0919 12:13:17.798633    3913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:13:17.798656    3913 client.go:171] duration metric: took 303.300042ms to LocalClient.Create
	I0919 12:13:19.800902    3913 start.go:128] duration metric: took 2.333979791s to createHost
	I0919 12:13:19.800981    3913 start.go:83] releasing machines lock for "multinode-327000", held for 2.334127667s
	W0919 12:13:19.801035    3913 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:13:19.819150    3913 out.go:177] * Deleting "multinode-327000" in qemu2 ...
	W0919 12:13:19.851413    3913 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:13:19.851436    3913 start.go:729] Will try again in 5 seconds ...
	I0919 12:13:24.853440    3913 start.go:360] acquireMachinesLock for multinode-327000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:13:24.853930    3913 start.go:364] duration metric: took 362µs to acquireMachinesLock for "multinode-327000"
	I0919 12:13:24.854038    3913 start.go:93] Provisioning new machine with config: &{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:13:24.854349    3913 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:13:24.873062    3913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:13:24.924226    3913 start.go:159] libmachine.API.Create for "multinode-327000" (driver="qemu2")
	I0919 12:13:24.924266    3913 client.go:168] LocalClient.Create starting
	I0919 12:13:24.924377    3913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:13:24.924446    3913 main.go:141] libmachine: Decoding PEM data...
	I0919 12:13:24.924469    3913 main.go:141] libmachine: Parsing certificate...
	I0919 12:13:24.924524    3913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:13:24.924568    3913 main.go:141] libmachine: Decoding PEM data...
	I0919 12:13:24.924583    3913 main.go:141] libmachine: Parsing certificate...
	I0919 12:13:24.925244    3913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:13:25.095460    3913 main.go:141] libmachine: Creating SSH key...
	I0919 12:13:25.165970    3913 main.go:141] libmachine: Creating Disk image...
	I0919 12:13:25.165975    3913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:13:25.166124    3913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:13:25.175445    3913 main.go:141] libmachine: STDOUT: 
	I0919 12:13:25.175468    3913 main.go:141] libmachine: STDERR: 
	I0919 12:13:25.175536    3913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2 +20000M
	I0919 12:13:25.183441    3913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:13:25.183456    3913 main.go:141] libmachine: STDERR: 
	I0919 12:13:25.183469    3913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:13:25.183473    3913 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:13:25.183484    3913 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:13:25.183524    3913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:f3:7b:aa:8d:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:13:25.185126    3913 main.go:141] libmachine: STDOUT: 
	I0919 12:13:25.185148    3913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:13:25.185161    3913 client.go:171] duration metric: took 260.8965ms to LocalClient.Create
	I0919 12:13:27.187320    3913 start.go:128] duration metric: took 2.332975041s to createHost
	I0919 12:13:27.187400    3913 start.go:83] releasing machines lock for "multinode-327000", held for 2.333508458s
	W0919 12:13:27.187867    3913 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-327000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-327000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:13:27.205670    3913 out.go:201] 
	W0919 12:13:27.209691    3913 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:13:27.209721    3913 out.go:270] * 
	* 
	W0919 12:13:27.212320    3913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:13:27.223551    3913 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-327000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (67.934958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
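The stderr trace shows how the VM is actually launched: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to /var/run/socket_vmnet before handing a file descriptor to qemu. That connection step can be reproduced in isolation. In this sketch, "echo ok" is a hypothetical stand-in for the qemu command line, assuming socket_vmnet_client execs whatever command follows the socket path, as the logged invocation suggests:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Client and socket paths copied from the logged invocation;
		// "echo ok" is a stand-in for qemu-system-aarch64 (assumption).
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "echo", "ok")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// While the daemon is down, expect the same
			// "Connection refused" seen in the log.
			fmt.Println("connect failed:", err)
		}
	}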

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (119.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.235417ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-327000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- rollout status deployment/busybox: exit status 1 (58.580042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.875375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:27.555148    1618 retry.go:31] will retry after 693.856221ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.0285ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:28.356411    1618 retry.go:31] will retry after 1.413562028s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.57375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:29.873836    1618 retry.go:31] will retry after 3.113227227s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.436792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:33.091800    1618 retry.go:31] will retry after 2.356811667s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.866542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:35.556780    1618 retry.go:31] will retry after 4.90557227s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.650292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:40.570263    1618 retry.go:31] will retry after 6.50980392s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.90525ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:47.188297    1618 retry.go:31] will retry after 8.360677752s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.421333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:13:55.656546    1618 retry.go:31] will retry after 12.631434688s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.43775ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:14:08.395361    1618 retry.go:31] will retry after 20.358438458s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.290666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:14:28.860151    1618 retry.go:31] will retry after 19.859501238s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.936458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0919 12:14:48.826605    1618 retry.go:31] will retry after 37.352524547s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.586125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.05275ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.764083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.609416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.895334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.412333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.24s)
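The retry.go lines above trace a jittered backoff: each failed attempt to read Pod IPs schedules the next one after a roughly increasing delay (694ms up to 37s), which is why this test spends nearly two minutes against a cluster that was never created. Below is a minimal sketch of that pattern; retryWithBackoff and its parameters are illustrative, not minikube's actual retry API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff mirrors the log's pattern: on failure, wait a
	// jittered, growing delay, then try again until attempts run out.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 1; i <= attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; will retry after %v\n", i, err, delay)
			time.Sleep(delay)
			delay = time.Duration(float64(delay) * (1.5 + rand.Float64())) // grow with jitter
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retryWithBackoff(5, 700*time.Millisecond, func() error {
			// Stands in for the kubectl call that keeps failing above.
			return errors.New(`no server found for cluster "multinode-327000"`)
		})
		fmt.Println(err)
	}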

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-327000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.891834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.570833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-327000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-327000 -v 3 --alsologtostderr: exit status 83 (46.145625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-327000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-327000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:15:26.658684    4012 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:26.658849    4012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:26.658852    4012 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:26.658855    4012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:26.658994    4012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:26.659217    4012 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:26.659427    4012 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:26.664797    4012 out.go:177] * The control-plane node multinode-327000 host is not running: state=Stopped
	I0919 12:15:26.670717    4012 out.go:177]   To start a cluster, run: "minikube start -p multinode-327000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-327000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.537916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-327000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-327000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.35275ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-327000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-327000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-327000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.522959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-327000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-327000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-327000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-327000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.496125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
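
The assertion at multinode_test.go:166 decodes the `profile list --output json` payload and counts `Config.Nodes` per profile; the dump above carries a single control-plane node where the test expects three. A minimal sketch of that check, with struct shapes cut down to only the fields visible in the JSON above (the real config carries many more):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields this check needs; shapes are inferred from the
// JSON dump in the log above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The test wants 3 nodes for multinode-327000; this run only
		// ever registered the primary control-plane node.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}
```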

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status --output json --alsologtostderr: exit status 7 (30.3855ms)

-- stdout --
	{"Name":"multinode-327000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0919 12:15:26.873247    4024 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:26.873409    4024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:26.873413    4024 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:26.873415    4024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:26.873542    4024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:26.873663    4024 out.go:352] Setting JSON to true
	I0919 12:15:26.873671    4024 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:26.873738    4024 notify.go:220] Checking for updates...
	I0919 12:15:26.873865    4024 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:26.873875    4024 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:26.874128    4024 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:26.874131    4024 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:26.874133    4024 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-327000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.147458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
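
The decode failure at multinode_test.go:191 is a shape mismatch: with only one node up, `status --output json` prints a bare object (see the stdout above), while the test unmarshals into `[]cluster.Status`. A tolerant decoder sketch, using a hypothetical `nodeStatus` type in place of minikube's actual `cluster.Status`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-in for cluster.Status, limited to fields
// visible in the stdout above.
type nodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

// decodeStatuses accepts either an array of objects (multi-node, what
// the test expects) or a bare object (one node, as in this run).
func decodeStatuses(data []byte) ([]nodeStatus, error) {
	var many []nodeStatus
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one nodeStatus
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, fmt.Errorf("neither array nor object: %w", err)
	}
	return []nodeStatus{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-327000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped"}`)
	statuses, err := decodeStatuses(raw)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d node(s), first host state: %s\n", len(statuses), statuses[0].Host)
}
```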

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 node stop m03: exit status 85 (46.75725ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-327000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status: exit status 7 (29.712791ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr: exit status 7 (30.494708ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:27.011226    4032 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:27.011365    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:27.011369    4032 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:27.011371    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:27.011518    4032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:27.011644    4032 out.go:352] Setting JSON to false
	I0919 12:15:27.011653    4032 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:27.011709    4032 notify.go:220] Checking for updates...
	I0919 12:15:27.011845    4032 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:27.011854    4032 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:27.012083    4032 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:27.012087    4032 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:27.012089    4032 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr": multinode-327000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.501375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
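
`node stop m03` exits 85 (GUEST_NODE_RETRIEVE) because m03 was never created, and the follow-up status exits 7 for the stopped host. minikube distinguishes failure classes by exit code, so a wrapper can branch on them; a sketch limited to the codes that actually appear in this report:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func runMinikube(args ...string) {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Codes taken from this report; the full mapping lives in
		// minikube's reason package.
		switch exitErr.ExitCode() {
		case 7:
			fmt.Println("host stopped (status reports non-running components)")
		case 80:
			fmt.Println("GUEST_PROVISION: the VM failed to start")
		case 85:
			fmt.Println("GUEST_NODE_RETRIEVE: the named node does not exist")
		default:
			fmt.Printf("exit %d: %s\n", exitErr.ExitCode(), out)
		}
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Print(string(out))
}

func main() {
	runMinikube("-p", "multinode-327000", "node", "stop", "m03")
}
```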

TestMultiNode/serial/StartAfterStop (52.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.800709ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0919 12:15:27.072494    4036 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:27.072755    4036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:27.072759    4036 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:27.072761    4036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:27.072894    4036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:27.073112    4036 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:27.073320    4036 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:27.077654    4036 out.go:201] 
	W0919 12:15:27.080657    4036 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0919 12:15:27.080663    4036 out.go:270] * 
	* 
	W0919 12:15:27.082372    4036 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:15:27.085653    4036 out.go:201] 

** /stderr **
multinode_test.go:284: I0919 12:15:27.072494    4036 out.go:345] Setting OutFile to fd 1 ...
I0919 12:15:27.072755    4036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 12:15:27.072759    4036 out.go:358] Setting ErrFile to fd 2...
I0919 12:15:27.072761    4036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 12:15:27.072894    4036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 12:15:27.073112    4036 mustload.go:65] Loading cluster: multinode-327000
I0919 12:15:27.073320    4036 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 12:15:27.077654    4036 out.go:201] 
W0919 12:15:27.080657    4036 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0919 12:15:27.080663    4036 out.go:270] * 
* 
W0919 12:15:27.082372    4036 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 12:15:27.085653    4036 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-327000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (29.822834ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:27.117855    4038 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:27.118009    4038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:27.118012    4038 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:27.118015    4038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:27.118135    4038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:27.118257    4038 out.go:352] Setting JSON to false
	I0919 12:15:27.118266    4038 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:27.118320    4038 notify.go:220] Checking for updates...
	I0919 12:15:27.118475    4038 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:27.118485    4038 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:27.118715    4038 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:27.118719    4038 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:27.118721    4038 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:27.119611    1618 retry.go:31] will retry after 913.759481ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (73.772125ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:28.107053    4040 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:28.107268    4040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:28.107273    4040 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:28.107277    4040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:28.107483    4040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:28.107655    4040 out.go:352] Setting JSON to false
	I0919 12:15:28.107669    4040 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:28.107740    4040 notify.go:220] Checking for updates...
	I0919 12:15:28.108001    4040 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:28.108014    4040 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:28.108365    4040 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:28.108370    4040 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:28.108373    4040 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:28.109479    1618 retry.go:31] will retry after 1.447154781s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (74.523916ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:29.631265    4042 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:29.631471    4042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:29.631476    4042 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:29.631480    4042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:29.631702    4042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:29.631875    4042 out.go:352] Setting JSON to false
	I0919 12:15:29.631889    4042 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:29.631938    4042 notify.go:220] Checking for updates...
	I0919 12:15:29.632210    4042 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:29.632225    4042 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:29.632561    4042 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:29.632566    4042 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:29.632569    4042 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:29.633661    1618 retry.go:31] will retry after 3.220215259s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (72.932542ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:32.926847    4046 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:32.927047    4046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:32.927052    4046 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:32.927056    4046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:32.927275    4046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:32.927463    4046 out.go:352] Setting JSON to false
	I0919 12:15:32.927476    4046 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:32.927523    4046 notify.go:220] Checking for updates...
	I0919 12:15:32.927773    4046 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:32.927785    4046 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:32.928102    4046 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:32.928107    4046 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:32.928109    4046 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:32.929205    1618 retry.go:31] will retry after 4.251081232s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (73.539917ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:37.254024    4048 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:37.254206    4048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:37.254211    4048 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:37.254214    4048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:37.254369    4048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:37.254527    4048 out.go:352] Setting JSON to false
	I0919 12:15:37.254539    4048 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:37.254575    4048 notify.go:220] Checking for updates...
	I0919 12:15:37.254809    4048 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:37.254823    4048 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:37.255135    4048 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:37.255140    4048 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:37.255143    4048 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:37.256204    1618 retry.go:31] will retry after 5.298318462s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (66.399458ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:42.620770    4055 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:42.620962    4055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:42.620968    4055 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:42.620971    4055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:42.621152    4055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:42.621333    4055 out.go:352] Setting JSON to false
	I0919 12:15:42.621356    4055 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:42.621405    4055 notify.go:220] Checking for updates...
	I0919 12:15:42.621657    4055 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:42.621671    4055 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:42.622023    4055 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:42.622028    4055 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:42.622031    4055 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:42.623227    1618 retry.go:31] will retry after 8.096129244s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (73.077ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:50.792501    4062 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:50.792725    4062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:50.792730    4062 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:50.792733    4062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:50.792883    4062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:50.793047    4062 out.go:352] Setting JSON to false
	I0919 12:15:50.793058    4062 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:50.793094    4062 notify.go:220] Checking for updates...
	I0919 12:15:50.793328    4062 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:50.793339    4062 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:50.793640    4062 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:50.793645    4062 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:50.793648    4062 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:50.794698    1618 retry.go:31] will retry after 7.777069847s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (74.396542ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:15:58.644538    4064 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:15:58.644737    4064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:58.644741    4064 out.go:358] Setting ErrFile to fd 2...
	I0919 12:15:58.644744    4064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:15:58.644949    4064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:15:58.645102    4064 out.go:352] Setting JSON to false
	I0919 12:15:58.645114    4064 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:15:58.645152    4064 notify.go:220] Checking for updates...
	I0919 12:15:58.645367    4064 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:15:58.645378    4064 status.go:174] checking status of multinode-327000 ...
	I0919 12:15:58.645688    4064 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:15:58.645693    4064 status.go:377] host is not running, skipping remaining checks
	I0919 12:15:58.645696    4064 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0919 12:15:58.646712    1618 retry.go:31] will retry after 21.24989515s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr: exit status 7 (73.57975ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:16:19.970123    4068 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:16:19.970309    4068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:19.970314    4068 out.go:358] Setting ErrFile to fd 2...
	I0919 12:16:19.970318    4068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:19.970477    4068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:16:19.970633    4068 out.go:352] Setting JSON to false
	I0919 12:16:19.970645    4068 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:16:19.970687    4068 notify.go:220] Checking for updates...
	I0919 12:16:19.970895    4068 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:16:19.970908    4068 status.go:174] checking status of multinode-327000 ...
	I0919 12:16:19.971204    4068 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:16:19.971209    4068 status.go:377] host is not running, skipping remaining checks
	I0919 12:16:19.971211    4068 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-327000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (32.625083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.96s)
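
The retry.go:31 lines above show the harness polling `status` with growing, jittered waits (0.9s up to 21.2s) until its budget runs out; every poll returns exit 7 because the host never starts. A loose sketch of that polling pattern (not the harness's actual retry helper):

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryStatus polls `minikube status` with jittered, growing waits,
// loosely mirroring the retry.go lines in the log above.
func retryStatus(profile string, attempts int) error {
	wait := time.Second
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run()
		if err == nil {
			return nil
		}
		// Jitter the delay and grow it, as the observed waits roughly do.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
	return fmt.Errorf("status never succeeded: %w", err)
}

func main() {
	if err := retryStatus("multinode-327000", 5); err != nil {
		fmt.Println(err)
	}
}
```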

TestMultiNode/serial/RestartKeepsNodes (8.58s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-327000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-327000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-327000: (3.21663825s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-327000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-327000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.227526834s)

-- stdout --
	* [multinode-327000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-327000" primary control-plane node in "multinode-327000" cluster
	* Restarting existing qemu2 VM for "multinode-327000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-327000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:16:23.317208    4092 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:16:23.317392    4092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:23.317397    4092 out.go:358] Setting ErrFile to fd 2...
	I0919 12:16:23.317400    4092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:23.317563    4092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:16:23.318732    4092 out.go:352] Setting JSON to false
	I0919 12:16:23.338132    4092 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2748,"bootTime":1726770635,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:16:23.338204    4092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:16:23.343673    4092 out.go:177] * [multinode-327000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:16:23.350736    4092 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:16:23.350791    4092 notify.go:220] Checking for updates...
	I0919 12:16:23.357675    4092 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:16:23.360666    4092 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:16:23.363698    4092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:16:23.366717    4092 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:16:23.369675    4092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:16:23.373013    4092 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:16:23.373071    4092 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:16:23.377664    4092 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:16:23.384714    4092 start.go:297] selected driver: qemu2
	I0919 12:16:23.384723    4092 start.go:901] validating driver "qemu2" against &{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:16:23.384793    4092 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:16:23.387329    4092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:16:23.387354    4092 cni.go:84] Creating CNI manager for ""
	I0919 12:16:23.387390    4092 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 12:16:23.387450    4092 start.go:340] cluster config:
	{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:16:23.391350    4092 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:23.398662    4092 out.go:177] * Starting "multinode-327000" primary control-plane node in "multinode-327000" cluster
	I0919 12:16:23.402651    4092 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:16:23.402665    4092 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:16:23.402671    4092 cache.go:56] Caching tarball of preloaded images
	I0919 12:16:23.402725    4092 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:16:23.402731    4092 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:16:23.402790    4092 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/multinode-327000/config.json ...
	I0919 12:16:23.403239    4092 start.go:360] acquireMachinesLock for multinode-327000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:16:23.403276    4092 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "multinode-327000"
	I0919 12:16:23.403285    4092 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:16:23.403292    4092 fix.go:54] fixHost starting: 
	I0919 12:16:23.403413    4092 fix.go:112] recreateIfNeeded on multinode-327000: state=Stopped err=<nil>
	W0919 12:16:23.403425    4092 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:16:23.411619    4092 out.go:177] * Restarting existing qemu2 VM for "multinode-327000" ...
	I0919 12:16:23.415679    4092 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:16:23.415717    4092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:f3:7b:aa:8d:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:16:23.417798    4092 main.go:141] libmachine: STDOUT: 
	I0919 12:16:23.417817    4092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:16:23.417848    4092 fix.go:56] duration metric: took 14.557542ms for fixHost
	I0919 12:16:23.417854    4092 start.go:83] releasing machines lock for "multinode-327000", held for 14.5735ms
	W0919 12:16:23.417860    4092 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:16:23.417900    4092 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:16:23.417905    4092 start.go:729] Will try again in 5 seconds ...
	I0919 12:16:28.420000    4092 start.go:360] acquireMachinesLock for multinode-327000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:16:28.420518    4092 start.go:364] duration metric: took 357.542µs to acquireMachinesLock for "multinode-327000"
	I0919 12:16:28.420724    4092 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:16:28.420745    4092 fix.go:54] fixHost starting: 
	I0919 12:16:28.421555    4092 fix.go:112] recreateIfNeeded on multinode-327000: state=Stopped err=<nil>
	W0919 12:16:28.421586    4092 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:16:28.426145    4092 out.go:177] * Restarting existing qemu2 VM for "multinode-327000" ...
	I0919 12:16:28.434056    4092 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:16:28.434286    4092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:f3:7b:aa:8d:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:16:28.443753    4092 main.go:141] libmachine: STDOUT: 
	I0919 12:16:28.443810    4092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:16:28.443920    4092 fix.go:56] duration metric: took 23.176ms for fixHost
	I0919 12:16:28.443940    4092 start.go:83] releasing machines lock for "multinode-327000", held for 23.369125ms
	W0919 12:16:28.444099    4092 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-327000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-327000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:16:28.452096    4092 out.go:201] 
	W0919 12:16:28.456183    4092 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:16:28.456218    4092 out.go:270] * 
	* 
	W0919 12:16:28.458861    4092 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:16:28.466022    4092 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-327000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-327000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (33.367583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.58s)
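Every start attempt above fails identically: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning nothing is listening on that socket. A minimal Go pre-flight sketch (not part of minikube or this suite; the socket path is the one logged above) that reproduces the check:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
		// With no daemon listening, Dial returns "connect: connection refused",
		// the same ECONNREFUSED surfacing as "Failed to connect" in the logs above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this prints the refusal, restarting the socket_vmnet daemon on the build host is the more likely fix than the suggested "minikube delete".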

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 node delete m03: exit status 83 (40.131583ms)

-- stdout --
	* The control-plane node multinode-327000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-327000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-327000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr: exit status 7 (29.315959ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:16:28.648808    4110 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:16:28.648962    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:28.648965    4110 out.go:358] Setting ErrFile to fd 2...
	I0919 12:16:28.648968    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:28.649116    4110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:16:28.649234    4110 out.go:352] Setting JSON to false
	I0919 12:16:28.649242    4110 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:16:28.649310    4110 notify.go:220] Checking for updates...
	I0919 12:16:28.649439    4110 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:16:28.649449    4110 status.go:174] checking status of multinode-327000 ...
	I0919 12:16:28.649686    4110 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:16:28.649690    4110 status.go:377] host is not running, skipping remaining checks
	I0919 12:16:28.649692    4110 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.456416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-327000 stop: (2.9402485s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status: exit status 7 (64.281292ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr: exit status 7 (32.7855ms)

-- stdout --
	multinode-327000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 12:16:31.717133    4134 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:16:31.717275    4134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:31.717279    4134 out.go:358] Setting ErrFile to fd 2...
	I0919 12:16:31.717281    4134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:31.717422    4134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:16:31.717545    4134 out.go:352] Setting JSON to false
	I0919 12:16:31.717564    4134 mustload.go:65] Loading cluster: multinode-327000
	I0919 12:16:31.717599    4134 notify.go:220] Checking for updates...
	I0919 12:16:31.717788    4134 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:16:31.717797    4134 status.go:174] checking status of multinode-327000 ...
	I0919 12:16:31.718056    4134 status.go:364] multinode-327000 host status = "Stopped" (err=<nil>)
	I0919 12:16:31.718060    4134 status.go:377] host is not running, skipping remaining checks
	I0919 12:16:31.718062    4134 status.go:176] multinode-327000 status: &{Name:multinode-327000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr": multinode-327000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-327000 status --alsologtostderr": multinode-327000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (30.296125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.07s)
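The stop itself succeeds (2.94s); the assertions at multinode_test.go:364 and :368 fail because the status output lists only the primary control-plane node, the worker having never been recreated after the earlier restart failures. A rough sketch of that kind of count-based assertion, assuming the helper simply counts per-node stanzas (the real check lives in multinode_test.go):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured above: only the primary node's stanza is
		// present, so counting "host: Stopped" lines yields 1, not 2.
		status := "multinode-327000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		wantNodes := 2 // the serial suite drives a two-node cluster
		if got := strings.Count(status, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
	}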

TestMultiNode/serial/RestartMultiNode (5.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-327000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-327000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.171829125s)

-- stdout --
	* [multinode-327000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-327000" primary control-plane node in "multinode-327000" cluster
	* Restarting existing qemu2 VM for "multinode-327000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-327000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:16:31.777603    4138 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:16:31.777744    4138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:31.777747    4138 out.go:358] Setting ErrFile to fd 2...
	I0919 12:16:31.777750    4138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:31.777868    4138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:16:31.778876    4138 out.go:352] Setting JSON to false
	I0919 12:16:31.795055    4138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2756,"bootTime":1726770635,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:16:31.795117    4138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:16:31.799892    4138 out.go:177] * [multinode-327000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:16:31.805756    4138 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:16:31.805800    4138 notify.go:220] Checking for updates...
	I0919 12:16:31.812848    4138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:16:31.815819    4138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:16:31.818846    4138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:16:31.821845    4138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:16:31.823165    4138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:16:31.826091    4138 config.go:182] Loaded profile config "multinode-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:16:31.826359    4138 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:16:31.830883    4138 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:16:31.832344    4138 start.go:297] selected driver: qemu2
	I0919 12:16:31.832350    4138 start.go:901] validating driver "qemu2" against &{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:16:31.832415    4138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:16:31.834418    4138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:16:31.834440    4138 cni.go:84] Creating CNI manager for ""
	I0919 12:16:31.834458    4138 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 12:16:31.834494    4138 start.go:340] cluster config:
	{Name:multinode-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-327000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:16:31.837970    4138 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:31.844843    4138 out.go:177] * Starting "multinode-327000" primary control-plane node in "multinode-327000" cluster
	I0919 12:16:31.848777    4138 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:16:31.848794    4138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:16:31.848808    4138 cache.go:56] Caching tarball of preloaded images
	I0919 12:16:31.848859    4138 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:16:31.848865    4138 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:16:31.848936    4138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/multinode-327000/config.json ...
	I0919 12:16:31.849396    4138 start.go:360] acquireMachinesLock for multinode-327000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:16:31.849424    4138 start.go:364] duration metric: took 22.25µs to acquireMachinesLock for "multinode-327000"
	I0919 12:16:31.849432    4138 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:16:31.849440    4138 fix.go:54] fixHost starting: 
	I0919 12:16:31.849566    4138 fix.go:112] recreateIfNeeded on multinode-327000: state=Stopped err=<nil>
	W0919 12:16:31.849573    4138 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:16:31.855772    4138 out.go:177] * Restarting existing qemu2 VM for "multinode-327000" ...
	I0919 12:16:31.859896    4138 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:16:31.859930    4138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:f3:7b:aa:8d:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:16:31.862239    4138 main.go:141] libmachine: STDOUT: 
	I0919 12:16:31.862259    4138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:16:31.862292    4138 fix.go:56] duration metric: took 12.852583ms for fixHost
	I0919 12:16:31.862296    4138 start.go:83] releasing machines lock for "multinode-327000", held for 12.868042ms
	W0919 12:16:31.862309    4138 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:16:31.862350    4138 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:16:31.862355    4138 start.go:729] Will try again in 5 seconds ...
	I0919 12:16:36.864418    4138 start.go:360] acquireMachinesLock for multinode-327000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:16:36.864788    4138 start.go:364] duration metric: took 283.5µs to acquireMachinesLock for "multinode-327000"
	I0919 12:16:36.864914    4138 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:16:36.864932    4138 fix.go:54] fixHost starting: 
	I0919 12:16:36.865628    4138 fix.go:112] recreateIfNeeded on multinode-327000: state=Stopped err=<nil>
	W0919 12:16:36.865654    4138 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:16:36.870106    4138 out.go:177] * Restarting existing qemu2 VM for "multinode-327000" ...
	I0919 12:16:36.877072    4138 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:16:36.877283    4138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:f3:7b:aa:8d:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/multinode-327000/disk.qcow2
	I0919 12:16:36.886031    4138 main.go:141] libmachine: STDOUT: 
	I0919 12:16:36.886236    4138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:16:36.886293    4138 fix.go:56] duration metric: took 21.362666ms for fixHost
	I0919 12:16:36.886309    4138 start.go:83] releasing machines lock for "multinode-327000", held for 21.49875ms
	W0919 12:16:36.886471    4138 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-327000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-327000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:16:36.894030    4138 out.go:201] 
	W0919 12:16:36.898119    4138 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:16:36.898143    4138 out.go:270] * 
	* 
	W0919 12:16:36.900518    4138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:16:36.908043    4138 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-327000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (68.786ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
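The timestamps make the start path's retry shape visible: fixHost fails at 12:16:31, minikube logs "Will try again in 5 seconds ...", retries once at 12:16:36, then exits with GUEST_PROVISION (status 80). A schematic sketch of that control flow, inferred from the log rather than taken from minikube's source:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the qemu2 driver start; in the runs above it
	// always fails with the socket_vmnet connection error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status 80 the test observes
		}
	}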

TestMultiNode/serial/ValidateNameConflict (20.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-327000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-327000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-327000-m01 --driver=qemu2 : exit status 80 (10.00229025s)

-- stdout --
	* [multinode-327000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-327000-m01" primary control-plane node in "multinode-327000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-327000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-327000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-327000-m02 --driver=qemu2 
E0919 12:16:47.803266    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:16:56.057961    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-327000-m02 --driver=qemu2 : exit status 80 (10.112301041s)

-- stdout --
	* [multinode-327000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-327000-m02" primary control-plane node in "multinode-327000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-327000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-327000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-327000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-327000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-327000: exit status 83 (80.998708ms)

-- stdout --
	* The control-plane node multinode-327000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-327000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-327000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-327000 -n multinode-327000: exit status 7 (31.234833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.35s)
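ValidateNameConflict never reaches the conflict check it targets, since neither fresh profile boots. For context, the conflict under test is that a profile named multinode-327000-m01 shadows multinode node naming (workers are addressed m02, m03, ..., as in the "node delete m03" call earlier). A hypothetical sketch of such a rule; conflictsWithNodeName is illustrative only, not minikube's API:

	package main

	import (
		"fmt"
		"regexp"
	)

	// conflictsWithNodeName reports whether a new profile name of the form
	// "<existing-profile>-mNN" would collide with an existing cluster's
	// node-name scheme.
	func conflictsWithNodeName(newProfile string, existing []string) bool {
		re := regexp.MustCompile(`^(.+)-m\d+$`)
		m := re.FindStringSubmatch(newProfile)
		if m == nil {
			return false
		}
		for _, p := range existing {
			if p == m[1] {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println(conflictsWithNodeName("multinode-327000-m01", []string{"multinode-327000"})) // true
	}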

TestPreload (10.14s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-406000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-406000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.9858335s)

-- stdout --
	* [test-preload-406000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-406000" primary control-plane node in "test-preload-406000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:16:57.480812    4195 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:16:57.480939    4195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:57.480942    4195 out.go:358] Setting ErrFile to fd 2...
	I0919 12:16:57.480944    4195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:16:57.481060    4195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:16:57.482121    4195 out.go:352] Setting JSON to false
	I0919 12:16:57.498071    4195 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2782,"bootTime":1726770635,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:16:57.498143    4195 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:16:57.504596    4195 out.go:177] * [test-preload-406000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:16:57.512622    4195 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:16:57.512677    4195 notify.go:220] Checking for updates...
	I0919 12:16:57.520567    4195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:16:57.523590    4195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:16:57.526514    4195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:16:57.529555    4195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:16:57.532613    4195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:16:57.535875    4195 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:16:57.535924    4195 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:16:57.540538    4195 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:16:57.547628    4195 start.go:297] selected driver: qemu2
	I0919 12:16:57.547639    4195 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:16:57.547647    4195 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:16:57.549940    4195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:16:57.553536    4195 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:16:57.556601    4195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:16:57.556620    4195 cni.go:84] Creating CNI manager for ""
	I0919 12:16:57.556644    4195 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:16:57.556649    4195 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:16:57.556674    4195 start.go:340] cluster config:
	{Name:test-preload-406000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:16:57.560325    4195 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.567581    4195 out.go:177] * Starting "test-preload-406000" primary control-plane node in "test-preload-406000" cluster
	I0919 12:16:57.571417    4195 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0919 12:16:57.571528    4195 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/test-preload-406000/config.json ...
	I0919 12:16:57.571544    4195 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/test-preload-406000/config.json: {Name:mkd3d7cc8a57c2fd39867493a3ebcb796a005256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:16:57.571562    4195 cache.go:107] acquiring lock: {Name:mk0d52bfac5dde9c7e687238a9468f2217281522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571562    4195 cache.go:107] acquiring lock: {Name:mkdb28a57a3be911ee65a0ee5935917447689a7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571567    4195 cache.go:107] acquiring lock: {Name:mkb509aa54ceb34db924c916348a5d067ce8d765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571600    4195 cache.go:107] acquiring lock: {Name:mk950456a6c8413f9a8b88d841e449ddcb032ad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571741    4195 cache.go:107] acquiring lock: {Name:mkcfa921bd4fce9d310b94f4d860e339eb282ee7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571849    4195 cache.go:107] acquiring lock: {Name:mk653b5458ff079c9b40b0b41ac00593a71ff07f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571862    4195 cache.go:107] acquiring lock: {Name:mk1391ddf7a6fde6a63d5d45e1f0c740e822c36e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.571867    4195 cache.go:107] acquiring lock: {Name:mkfa709467e20554b5a82e4a79940fc75b6980ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:16:57.572020    4195 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:16:57.572035    4195 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 12:16:57.572104    4195 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 12:16:57.572116    4195 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0919 12:16:57.572128    4195 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:16:57.572151    4195 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 12:16:57.572136    4195 start.go:360] acquireMachinesLock for test-preload-406000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:16:57.572256    4195 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 12:16:57.572261    4195 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:16:57.572308    4195 start.go:364] duration metric: took 95.25µs to acquireMachinesLock for "test-preload-406000"
	I0919 12:16:57.572320    4195 start.go:93] Provisioning new machine with config: &{Name:test-preload-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:16:57.572347    4195 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:16:57.579490    4195 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:16:57.583508    4195 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:16:57.586278    4195 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:16:57.586312    4195 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 12:16:57.586315    4195 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 12:16:57.586325    4195 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 12:16:57.586279    4195 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 12:16:57.586360    4195 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:16:57.586366    4195 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 12:16:57.598103    4195 start.go:159] libmachine.API.Create for "test-preload-406000" (driver="qemu2")
	I0919 12:16:57.598125    4195 client.go:168] LocalClient.Create starting
	I0919 12:16:57.598189    4195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:16:57.598218    4195 main.go:141] libmachine: Decoding PEM data...
	I0919 12:16:57.598226    4195 main.go:141] libmachine: Parsing certificate...
	I0919 12:16:57.598259    4195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:16:57.598282    4195 main.go:141] libmachine: Decoding PEM data...
	I0919 12:16:57.598291    4195 main.go:141] libmachine: Parsing certificate...
	I0919 12:16:57.598615    4195 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:16:57.758811    4195 main.go:141] libmachine: Creating SSH key...
	I0919 12:16:57.852712    4195 main.go:141] libmachine: Creating Disk image...
	I0919 12:16:57.852735    4195 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:16:57.852933    4195 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2
	I0919 12:16:57.862802    4195 main.go:141] libmachine: STDOUT: 
	I0919 12:16:57.862824    4195 main.go:141] libmachine: STDERR: 
	I0919 12:16:57.862889    4195 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2 +20000M
	I0919 12:16:57.872156    4195 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:16:57.872186    4195 main.go:141] libmachine: STDERR: 
	I0919 12:16:57.872209    4195 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2
	I0919 12:16:57.872214    4195 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:16:57.872228    4195 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:16:57.872253    4195 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:19:7e:de:d4:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2
	I0919 12:16:57.874361    4195 main.go:141] libmachine: STDOUT: 
	I0919 12:16:57.874381    4195 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:16:57.874408    4195 client.go:171] duration metric: took 276.284792ms to LocalClient.Create
	I0919 12:16:58.086306    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0919 12:16:58.104076    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0919 12:16:58.118379    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0919 12:16:58.119528    4195 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0919 12:16:58.119561    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0919 12:16:58.134135    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0919 12:16:58.150510    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0919 12:16:58.187802    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0919 12:16:58.293019    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0919 12:16:58.293069    4195 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 721.316583ms
	I0919 12:16:58.293129    4195 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0919 12:16:58.661347    4195 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0919 12:16:58.661438    4195 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0919 12:16:59.369313    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 12:16:59.369365    4195 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.797850708s
	I0919 12:16:59.369379    4195 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 12:16:59.874641    4195 start.go:128] duration metric: took 2.302237625s to createHost
	I0919 12:16:59.874702    4195 start.go:83] releasing machines lock for "test-preload-406000", held for 2.302446042s
	W0919 12:16:59.874743    4195 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:16:59.887950    4195 out.go:177] * Deleting "test-preload-406000" in qemu2 ...
	W0919 12:16:59.919661    4195 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:16:59.919683    4195 start.go:729] Will try again in 5 seconds ...
	I0919 12:17:00.055222    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0919 12:17:00.055294    4195 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.483627791s
	I0919 12:17:00.055358    4195 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0919 12:17:00.514417    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0919 12:17:00.514479    4195 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.942739542s
	I0919 12:17:00.514508    4195 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0919 12:17:02.746645    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0919 12:17:02.746704    4195 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.175227292s
	I0919 12:17:02.746728    4195 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0919 12:17:02.797567    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0919 12:17:02.797609    4195 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.2259915s
	I0919 12:17:02.797640    4195 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0919 12:17:03.816884    4195 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0919 12:17:03.816940    4195 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.245548625s
	I0919 12:17:03.816973    4195 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0919 12:17:04.919778    4195 start.go:360] acquireMachinesLock for test-preload-406000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:17:04.920242    4195 start.go:364] duration metric: took 387.084µs to acquireMachinesLock for "test-preload-406000"
	I0919 12:17:04.920373    4195 start.go:93] Provisioning new machine with config: &{Name:test-preload-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:17:04.920575    4195 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:17:04.942823    4195 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:17:04.992074    4195 start.go:159] libmachine.API.Create for "test-preload-406000" (driver="qemu2")
	I0919 12:17:04.992131    4195 client.go:168] LocalClient.Create starting
	I0919 12:17:04.992258    4195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:17:04.992323    4195 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:04.992344    4195 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:04.992417    4195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:17:04.992464    4195 main.go:141] libmachine: Decoding PEM data...
	I0919 12:17:04.992487    4195 main.go:141] libmachine: Parsing certificate...
	I0919 12:17:04.992979    4195 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:17:05.162912    4195 main.go:141] libmachine: Creating SSH key...
	I0919 12:17:05.360748    4195 main.go:141] libmachine: Creating Disk image...
	I0919 12:17:05.360759    4195 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:17:05.360971    4195 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2
	I0919 12:17:05.370716    4195 main.go:141] libmachine: STDOUT: 
	I0919 12:17:05.370731    4195 main.go:141] libmachine: STDERR: 
	I0919 12:17:05.370792    4195 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2 +20000M
	I0919 12:17:05.378875    4195 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:17:05.378900    4195 main.go:141] libmachine: STDERR: 
	I0919 12:17:05.378915    4195 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2
	I0919 12:17:05.378920    4195 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:17:05.378926    4195 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:17:05.378964    4195 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:6d:2f:0b:42:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/test-preload-406000/disk.qcow2
	I0919 12:17:05.380724    4195 main.go:141] libmachine: STDOUT: 
	I0919 12:17:05.380740    4195 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:17:05.380755    4195 client.go:171] duration metric: took 388.629459ms to LocalClient.Create
	I0919 12:17:07.381871    4195 start.go:128] duration metric: took 2.461326959s to createHost
	I0919 12:17:07.381937    4195 start.go:83] releasing machines lock for "test-preload-406000", held for 2.461734625s
	W0919 12:17:07.382165    4195 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:17:07.398890    4195 out.go:201] 
	W0919 12:17:07.402858    4195 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:17:07.402883    4195 out.go:270] * 
	W0919 12:17:07.405762    4195 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:17:07.422817    4195 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-406000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-19 12:17:07.440476 -0700 PDT m=+2333.960961959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-406000 -n test-preload-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-406000 -n test-preload-406000: exit status 7 (67.279125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-406000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-406000
--- FAIL: TestPreload (10.14s)
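Note: every fast failure in this report shares one root cause, visible in the log above: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so each VM is created and immediately torn down. A minimal Go probe for that socket, with the path copied from the log (a diagnostic sketch only, not part of minikube):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs. "connection refused"
		// here reproduces the failure above and means the socket_vmnet daemon is
		// not running (it must run as root to obtain the vmnet entitlement).
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Restarting the socket_vmnet daemon on the CI host should clear this entire class of failures; the tests below fail the same way until it is back.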

                                                
                                    
TestScheduledStopUnix (10.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-700000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-700000 --memory=2048 --driver=qemu2 : exit status 80 (9.854411792s)

                                                
                                                
-- stdout --
	* [scheduled-stop-700000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-700000" primary control-plane node in "scheduled-stop-700000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-700000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-19 12:17:17.44655 -0700 PDT m=+2343.967308584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-700000 -n scheduled-stop-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-700000 -n scheduled-stop-700000: exit status 7 (70.668458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-700000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-700000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-700000
--- FAIL: TestScheduledStopUnix (10.01s)
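Note: the "exit status 7 (may be ok)" from the status post-mortem above is a bitmask of component states, not a plain error code. A runnable sketch of the scheme, with constant names recalled from minikube's cmd/minikube/cmd/status.go (treat them as assumptions and verify against the source):

	package main

	import "fmt"

	// Each unhealthy component contributes one bit to the status exit code.
	const (
		minikubeNotRunningStatusFlag = 1 << 0 // host not running
		clusterNotRunningStatusFlag  = 1 << 1 // control plane not running
		k8sNotRunningStatusFlag      = 1 << 2 // kubernetes components not running
	)

	func main() {
		code := 7 // the exit status reported by the post-mortem above
		fmt.Println("host down:         ", code&minikubeNotRunningStatusFlag != 0)
		fmt.Println("control plane down:", code&clusterNotRunningStatusFlag != 0)
		fmt.Println("kubernetes down:   ", code&k8sNotRunningStatusFlag != 0)
	}

All three bits set (1|2|4 = 7) matches the "Stopped" state that helpers_test.go treats as acceptable.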

                                                
                                    
TestSkaffold (12.45s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1071148007 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1071148007 version: (1.062929s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-420000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-420000 --memory=2600 --driver=qemu2 : exit status 80 (9.969367458s)

                                                
                                                
-- stdout --
	* [skaffold-420000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-420000" primary control-plane node in "skaffold-420000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-420000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
panic.go:629: *** TestSkaffold FAILED at 2024-09-19 12:17:29.902241 -0700 PDT m=+2356.423340209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-420000 -n skaffold-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-420000 -n skaffold-420000: exit status 7 (62.909959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-420000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-420000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-420000
--- FAIL: TestSkaffold (12.45s)
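Note: TestSkaffold never reaches skaffold itself; the preparatory minikube start fails first. A minimal repro of that failing invocation, with the binary path, profile name, and flags copied from the log above (run it from the integration workspace; exit code 80 is minikube's guest-error class per pkg/minikube/reason, a name recalled from memory and worth verifying):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Re-run the exact command the test ran and surface its exit code.
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "skaffold-420000",
			"--memory=2600", "--driver=qemu2")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// Expect 80 (GUEST_PROVISION) until socket_vmnet is reachable again.
			fmt.Println("exit code:", cmd.ProcessState.ExitCode())
		}
	}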

                                                
                                    
TestRunningBinaryUpgrade (590.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4181221315 start -p running-upgrade-356000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4181221315 start -p running-upgrade-356000 --memory=2200 --vm-driver=qemu2 : (54.04819775s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-356000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-356000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m21.219976792s)

                                                
                                                
-- stdout --
	* [running-upgrade-356000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-356000" primary control-plane node in "running-upgrade-356000" cluster
	* Updating the running qemu2 "running-upgrade-356000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:19:07.559215    4610 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:19:07.559349    4610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:19:07.559357    4610 out.go:358] Setting ErrFile to fd 2...
	I0919 12:19:07.559359    4610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:19:07.559477    4610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:19:07.560615    4610 out.go:352] Setting JSON to false
	I0919 12:19:07.577021    4610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2912,"bootTime":1726770635,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:19:07.577082    4610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:19:07.582786    4610 out.go:177] * [running-upgrade-356000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:19:07.589782    4610 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:19:07.589841    4610 notify.go:220] Checking for updates...
	I0919 12:19:07.597685    4610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:19:07.600743    4610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:19:07.603778    4610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:19:07.606688    4610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:19:07.609788    4610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:19:07.613087    4610 config.go:182] Loaded profile config "running-upgrade-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:19:07.616674    4610 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0919 12:19:07.619739    4610 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:19:07.623745    4610 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:19:07.630704    4610 start.go:297] selected driver: qemu2
	I0919 12:19:07.630710    4610 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:19:07.630758    4610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:19:07.633174    4610 cni.go:84] Creating CNI manager for ""
	I0919 12:19:07.633211    4610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:19:07.633240    4610 start.go:340] cluster config:
	{Name:running-upgrade-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:19:07.633293    4610 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:19:07.640651    4610 out.go:177] * Starting "running-upgrade-356000" primary control-plane node in "running-upgrade-356000" cluster
	I0919 12:19:07.644734    4610 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0919 12:19:07.644747    4610 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0919 12:19:07.644753    4610 cache.go:56] Caching tarball of preloaded images
	I0919 12:19:07.644810    4610 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:19:07.644815    4610 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0919 12:19:07.644863    4610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/config.json ...
	I0919 12:19:07.645290    4610 start.go:360] acquireMachinesLock for running-upgrade-356000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:19:07.645322    4610 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "running-upgrade-356000"
	I0919 12:19:07.645329    4610 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:19:07.645337    4610 fix.go:54] fixHost starting: 
	I0919 12:19:07.645972    4610 fix.go:112] recreateIfNeeded on running-upgrade-356000: state=Running err=<nil>
	W0919 12:19:07.645981    4610 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:19:07.649717    4610 out.go:177] * Updating the running qemu2 "running-upgrade-356000" VM ...
	I0919 12:19:07.657741    4610 machine.go:93] provisionDockerMachine start ...
	I0919 12:19:07.657777    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:07.657873    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:07.657878    4610 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 12:19:07.722843    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-356000
	
	I0919 12:19:07.722855    4610 buildroot.go:166] provisioning hostname "running-upgrade-356000"
	I0919 12:19:07.722908    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:07.723018    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:07.723024    4610 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-356000 && echo "running-upgrade-356000" | sudo tee /etc/hostname
	I0919 12:19:07.790837    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-356000
	
	I0919 12:19:07.790893    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:07.791002    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:07.791012    4610 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-356000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-356000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-356000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 12:19:07.853978    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 12:19:07.853989    4610 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19664-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19664-1099/.minikube}
	I0919 12:19:07.853997    4610 buildroot.go:174] setting up certificates
	I0919 12:19:07.854001    4610 provision.go:84] configureAuth start
	I0919 12:19:07.854009    4610 provision.go:143] copyHostCerts
	I0919 12:19:07.854086    4610 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem, removing ...
	I0919 12:19:07.854092    4610 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem
	I0919 12:19:07.854224    4610 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem (1078 bytes)
	I0919 12:19:07.854412    4610 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem, removing ...
	I0919 12:19:07.854416    4610 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem
	I0919 12:19:07.854470    4610 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem (1123 bytes)
	I0919 12:19:07.854566    4610 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem, removing ...
	I0919 12:19:07.854571    4610 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem
	I0919 12:19:07.854615    4610 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem (1679 bytes)
	I0919 12:19:07.854700    4610 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-356000 san=[127.0.0.1 localhost minikube running-upgrade-356000]
	I0919 12:19:07.976787    4610 provision.go:177] copyRemoteCerts
	I0919 12:19:07.976840    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 12:19:07.976849    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:19:08.010306    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 12:19:08.017561    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 12:19:08.024746    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 12:19:08.031113    4610 provision.go:87] duration metric: took 177.105167ms to configureAuth
	I0919 12:19:08.031124    4610 buildroot.go:189] setting minikube options for container-runtime
	I0919 12:19:08.031237    4610 config.go:182] Loaded profile config "running-upgrade-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:19:08.031275    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:08.031366    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:08.031374    4610 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 12:19:08.098142    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 12:19:08.098153    4610 buildroot.go:70] root file system type: tmpfs
	I0919 12:19:08.098207    4610 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 12:19:08.098260    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:08.098383    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:08.098417    4610 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 12:19:08.165965    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 12:19:08.166030    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:08.166147    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:08.166158    4610 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 12:19:08.232340    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 12:19:08.232352    4610 machine.go:96] duration metric: took 574.621084ms to provisionDockerMachine
	I0919 12:19:08.232361    4610 start.go:293] postStartSetup for "running-upgrade-356000" (driver="qemu2")
	I0919 12:19:08.232369    4610 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 12:19:08.232429    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 12:19:08.232438    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:19:08.268421    4610 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 12:19:08.269662    4610 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 12:19:08.269669    4610 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/addons for local assets ...
	I0919 12:19:08.269744    4610 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/files for local assets ...
	I0919 12:19:08.269876    4610 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0919 12:19:08.270006    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 12:19:08.272813    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0919 12:19:08.279667    4610 start.go:296] duration metric: took 47.302833ms for postStartSetup
	I0919 12:19:08.279679    4610 fix.go:56] duration metric: took 634.363084ms for fixHost
	I0919 12:19:08.279721    4610 main.go:141] libmachine: Using SSH client type: native
	I0919 12:19:08.279819    4610 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fcd190] 0x100fcf9d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0919 12:19:08.279828    4610 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 12:19:08.341166    4610 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773548.794127221
	
	I0919 12:19:08.341177    4610 fix.go:216] guest clock: 1726773548.794127221
	I0919 12:19:08.341181    4610 fix.go:229] Guest: 2024-09-19 12:19:08.794127221 -0700 PDT Remote: 2024-09-19 12:19:08.27968 -0700 PDT m=+0.740270710 (delta=514.447221ms)
	I0919 12:19:08.341192    4610 fix.go:200] guest clock delta is within tolerance: 514.447221ms
	I0919 12:19:08.341195    4610 start.go:83] releasing machines lock for "running-upgrade-356000", held for 695.88875ms
	I0919 12:19:08.341268    4610 ssh_runner.go:195] Run: cat /version.json
	I0919 12:19:08.341279    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:19:08.341268    4610 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 12:19:08.341306    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	W0919 12:19:08.341875    4610 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50268: connect: connection refused
	I0919 12:19:08.341895    4610 retry.go:31] will retry after 279.68449ms: dial tcp [::1]:50268: connect: connection refused
	W0919 12:19:08.374816    4610 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0919 12:19:08.374869    4610 ssh_runner.go:195] Run: systemctl --version
	I0919 12:19:08.376798    4610 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 12:19:08.378461    4610 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 12:19:08.378487    4610 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0919 12:19:08.381670    4610 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0919 12:19:08.386267    4610 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 12:19:08.386274    4610 start.go:495] detecting cgroup driver to use...
	I0919 12:19:08.386346    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 12:19:08.391606    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0919 12:19:08.394365    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 12:19:08.397583    4610 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 12:19:08.397609    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 12:19:08.401339    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 12:19:08.404586    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 12:19:08.407314    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 12:19:08.410270    4610 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 12:19:08.413460    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 12:19:08.416729    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 12:19:08.419469    4610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 12:19:08.422311    4610 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 12:19:08.424790    4610 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 12:19:08.427584    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:19:08.524880    4610 ssh_runner.go:195] Run: sudo systemctl restart containerd
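Note: the containerd reconfiguration above is a chain of in-place sed edits; the SystemdCgroup rewrite, for instance, is an anchored, indentation-preserving replacement. A Go sketch of that single edit (illustrative only; the test itself shells out to sed, and the config.toml fragment here is assumed):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }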
	I0919 12:19:08.531686    4610 start.go:495] detecting cgroup driver to use...
	I0919 12:19:08.531756    4610 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 12:19:08.540044    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 12:19:08.545553    4610 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 12:19:08.557392    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 12:19:08.562409    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 12:19:08.567398    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 12:19:08.572651    4610 ssh_runner.go:195] Run: which cri-dockerd
	I0919 12:19:08.573934    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 12:19:08.576926    4610 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 12:19:08.581748    4610 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 12:19:08.676725    4610 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 12:19:08.765071    4610 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 12:19:08.765131    4610 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0919 12:19:08.770890    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:19:08.859234    4610 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 12:19:10.479829    4610 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.620624208s)
	I0919 12:19:10.479888    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 12:19:10.484852    4610 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 12:19:10.491515    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 12:19:10.496921    4610 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 12:19:10.584556    4610 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 12:19:10.662006    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:19:10.747832    4610 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 12:19:10.754109    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 12:19:10.759000    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:19:10.835987    4610 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
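Note: the "configuring docker to use cgroupfs" step above ships a small /etc/docker/daemon.json from memory before the daemon restarts. A sketch of producing such a file (the exact field set is an assumption; the log only implies the cgroup driver and a 130-byte payload):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed content; only the cgroupfs driver choice is visible above.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }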
	I0919 12:19:10.875706    4610 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 12:19:10.875792    4610 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 12:19:10.878492    4610 start.go:563] Will wait 60s for crictl version
	I0919 12:19:10.878561    4610 ssh_runner.go:195] Run: which crictl
	I0919 12:19:10.880025    4610 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 12:19:10.891428    4610 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0919 12:19:10.891511    4610 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 12:19:10.904659    4610 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 12:19:10.927097    4610 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0919 12:19:10.927196    4610 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0919 12:19:10.928652    4610 kubeadm.go:883] updating cluster {Name:running-upgrade-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0919 12:19:10.928696    4610 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0919 12:19:10.928741    4610 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 12:19:10.938976    4610 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 12:19:10.938990    4610 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
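Note: the rejection above comes down to tag names: the preloaded tarball's images are tagged under k8s.gcr.io, while this minikube expects registry.k8s.io names for v1.24.1, so the containment check fails. A sketch of that check over the same docker images --format output (hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        want := "registry.k8s.io/kube-apiserver:v1.24.1"
        for _, ref := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if ref == want {
                fmt.Println("preloaded")
                return
            }
        }
        fmt.Printf("%s wasn't preloaded\n", want)
    }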
	I0919 12:19:10.939045    4610 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 12:19:10.942283    4610 ssh_runner.go:195] Run: which lz4
	I0919 12:19:10.943662    4610 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 12:19:10.944904    4610 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 12:19:10.944913    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0919 12:19:11.891703    4610 docker.go:649] duration metric: took 948.10575ms to copy over tarball
	I0919 12:19:11.891773    4610 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 12:19:13.017342    4610 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.125585959s)
	I0919 12:19:13.017356    4610 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 12:19:13.033646    4610 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 12:19:13.037313    4610 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0919 12:19:13.042458    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:19:13.135536    4610 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 12:19:13.456218    4610 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 12:19:13.469637    4610 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 12:19:13.469655    4610 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0919 12:19:13.469660    4610 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 12:19:13.473747    4610 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:19:13.475393    4610 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:19:13.477531    4610 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:19:13.478011    4610 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:19:13.480309    4610 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:19:13.480488    4610 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:19:13.482158    4610 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:19:13.482196    4610 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:19:13.483629    4610 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:19:13.484084    4610 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0919 12:19:13.485204    4610 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:19:13.485382    4610 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:19:13.486554    4610 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:19:13.486947    4610 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 12:19:13.488057    4610 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:19:13.489012    4610 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:19:13.927129    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:19:13.932345    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:19:13.936581    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:19:13.947999    4610 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0919 12:19:13.948028    4610 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:19:13.948101    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:19:13.953298    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:19:13.957672    4610 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0919 12:19:13.957699    4610 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:19:13.957769    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:19:13.964118    4610 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0919 12:19:13.964144    4610 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:19:13.964220    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:19:13.976482    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0919 12:19:13.977557    4610 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0919 12:19:13.977574    4610 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:19:13.977634    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:19:13.978874    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0919 12:19:13.981203    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0919 12:19:13.982718    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0919 12:19:13.985600    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0919 12:19:13.993256    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0919 12:19:14.000136    4610 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0919 12:19:14.000161    4610 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0919 12:19:14.000227    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0919 12:19:14.003493    4610 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0919 12:19:14.003627    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:19:14.005605    4610 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0919 12:19:14.005626    4610 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:19:14.005669    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0919 12:19:14.012215    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0919 12:19:14.012351    4610 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0919 12:19:14.028745    4610 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0919 12:19:14.028767    4610 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:19:14.028796    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0919 12:19:14.028811    4610 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0919 12:19:14.028827    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0919 12:19:14.028850    4610 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:19:14.041082    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0919 12:19:14.041220    4610 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0919 12:19:14.042823    4610 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0919 12:19:14.042836    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0919 12:19:14.047194    4610 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0919 12:19:14.047205    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0919 12:19:14.097994    4610 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0919 12:19:14.098025    4610 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0919 12:19:14.098031    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0919 12:19:14.136565    4610 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
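Note: each "Loading image" step above pipes a cached tarball into the daemon via "sudo cat FILE | docker load". A stdlib sketch of the same pipeline with the file wired to stdin (hypothetical helper; the log runs it via sudo over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func loadImage(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // equivalent of: cat path | docker load
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }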
	W0919 12:19:14.449068    4610 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0919 12:19:14.449634    4610 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:19:14.484351    4610 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0919 12:19:14.484397    4610 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:19:14.484537    4610 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:19:14.985088    4610 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0919 12:19:14.985607    4610 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0919 12:19:14.991079    4610 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0919 12:19:14.991116    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0919 12:19:15.045311    4610 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0919 12:19:15.045327    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0919 12:19:15.279312    4610 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0919 12:19:15.279354    4610 cache_images.go:92] duration metric: took 1.8097365s to LoadCachedImages
	W0919 12:19:15.279398    4610 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0919 12:19:15.279403    4610 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0919 12:19:15.279461    4610 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-356000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 12:19:15.279549    4610 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 12:19:15.292947    4610 cni.go:84] Creating CNI manager for ""
	I0919 12:19:15.292962    4610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:19:15.292971    4610 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 12:19:15.292981    4610 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-356000 NodeName:running-upgrade-356000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 12:19:15.293045    4610 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-356000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
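Note: minikube renders the kubeadm config above from a Go text/template before shipping it to /var/tmp/minikube/kubeadm.yaml.new. A toy rendering of one fragment (the template text and field names here are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const frag = "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: {{.NodeIP}}\n  bindPort: {{.BindPort}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        // Values taken from the node settings logged above.
        if err := t.Execute(os.Stdout, map[string]any{
            "NodeIP":   "10.0.2.15",
            "BindPort": 8443,
        }); err != nil {
            panic(err)
        }
    }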
	
	I0919 12:19:15.293110    4610 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0919 12:19:15.296924    4610 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 12:19:15.296960    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 12:19:15.299862    4610 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0919 12:19:15.304953    4610 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 12:19:15.309951    4610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0919 12:19:15.315626    4610 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0919 12:19:15.317012    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:19:15.398911    4610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:19:15.404294    4610 certs.go:68] Setting up /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000 for IP: 10.0.2.15
	I0919 12:19:15.404300    4610 certs.go:194] generating shared ca certs ...
	I0919 12:19:15.404309    4610 certs.go:226] acquiring lock for ca certs: {Name:mk207a98b59455406f5fa19947ac5c81f6753b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:19:15.404463    4610 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key
	I0919 12:19:15.404516    4610 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key
	I0919 12:19:15.404523    4610 certs.go:256] generating profile certs ...
	I0919 12:19:15.404596    4610 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.key
	I0919 12:19:15.404612    4610 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.key.35c1d5ab
	I0919 12:19:15.404620    4610 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.crt.35c1d5ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0919 12:19:15.515211    4610 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.crt.35c1d5ab ...
	I0919 12:19:15.515220    4610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.crt.35c1d5ab: {Name:mk1436684860a61f77abebff2a2027c10d94929f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:19:15.515686    4610 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.key.35c1d5ab ...
	I0919 12:19:15.515696    4610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.key.35c1d5ab: {Name:mk4b7ce20da47cb46d5659e9b6a204a1a3006ace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:19:15.515866    4610 certs.go:381] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.crt.35c1d5ab -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.crt
	I0919 12:19:15.515997    4610 certs.go:385] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.key.35c1d5ab -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.key
	I0919 12:19:15.516172    4610 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/proxy-client.key
	I0919 12:19:15.516302    4610 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618.pem (1338 bytes)
	W0919 12:19:15.516331    4610 certs.go:480] ignoring /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0919 12:19:15.516338    4610 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 12:19:15.516360    4610 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem (1078 bytes)
	I0919 12:19:15.516381    4610 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem (1123 bytes)
	I0919 12:19:15.516403    4610 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem (1679 bytes)
	I0919 12:19:15.516450    4610 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0919 12:19:15.516769    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 12:19:15.524179    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 12:19:15.530541    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 12:19:15.537678    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 12:19:15.544976    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 12:19:15.551757    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 12:19:15.558247    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 12:19:15.565517    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 12:19:15.573057    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0919 12:19:15.579847    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0919 12:19:15.586290    4610 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 12:19:15.593100    4610 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 12:19:15.598069    4610 ssh_runner.go:195] Run: openssl version
	I0919 12:19:15.599852    4610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0919 12:19:15.602581    4610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0919 12:19:15.603992    4610 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 18:54 /usr/share/ca-certificates/16182.pem
	I0919 12:19:15.604021    4610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0919 12:19:15.605787    4610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 12:19:15.608803    4610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 12:19:15.611635    4610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:19:15.613042    4610 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:19:15.613067    4610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:19:15.614843    4610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 12:19:15.617895    4610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0919 12:19:15.621093    4610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0919 12:19:15.622571    4610 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 18:54 /usr/share/ca-certificates/1618.pem
	I0919 12:19:15.622590    4610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0919 12:19:15.624383    4610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0919 12:19:15.626936    4610 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 12:19:15.628329    4610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 12:19:15.630142    4610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 12:19:15.632010    4610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 12:19:15.633736    4610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 12:19:15.635615    4610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 12:19:15.637316    4610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
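Note: the openssl x509 -checkend 86400 runs above exit non-zero when a certificate will expire within the next 86400 seconds (24h), which is how minikube decides whether to regenerate it. A stdlib equivalent of that test (sketch; the cert path is taken from the log and assumed readable):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent to: openssl x509 -noout -checkend 86400
        expiring := time.Now().Add(24 * time.Hour).After(cert.NotAfter)
        fmt.Printf("expires within 24h: %v\n", expiring)
    }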
	I0919 12:19:15.639034    4610 kubeadm.go:392] StartCluster: {Name:running-upgrade-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:19:15.639102    4610 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 12:19:15.649377    4610 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 12:19:15.653016    4610 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 12:19:15.653025    4610 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0919 12:19:15.653052    4610 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 12:19:15.655714    4610 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:19:15.655990    4610 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-356000" does not appear in /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:19:15.656049    4610 kubeconfig.go:62] /Users/jenkins/minikube-integration/19664-1099/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-356000" cluster setting kubeconfig missing "running-upgrade-356000" context setting]
	I0919 12:19:15.656187    4610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:19:15.656844    4610 kapi.go:59] client config for running-upgrade-356000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a5800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:19:15.657180    4610 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 12:19:15.660020    4610 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-356000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0919 12:19:15.660026    4610 kubeadm.go:1160] stopping kube-system containers ...
	I0919 12:19:15.660075    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 12:19:15.670922    4610 docker.go:483] Stopping containers: [10dca9d5f343 5fa579beb9a3 b07198bb3ff0 51ba9aca8c7e 103fc45092f8 3652994714e2 32dca4ac5ee1 e2b28bfdabb8 9161fcde3ffa d94e11607117 d18d4dc279d8 fed24a87db93]
	I0919 12:19:15.671010    4610 ssh_runner.go:195] Run: docker stop 10dca9d5f343 5fa579beb9a3 b07198bb3ff0 51ba9aca8c7e 103fc45092f8 3652994714e2 32dca4ac5ee1 e2b28bfdabb8 9161fcde3ffa d94e11607117 d18d4dc279d8 fed24a87db93
	I0919 12:19:15.682092    4610 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 12:19:15.780939    4610 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:19:15.785237    4610 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 19 19:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 19 19:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 19 19:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 19 19:18 /etc/kubernetes/scheduler.conf
	
	I0919 12:19:15.785274    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf
	I0919 12:19:15.789200    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:19:15.789235    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:19:15.792676    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf
	I0919 12:19:15.796156    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:19:15.796185    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:19:15.799062    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf
	I0919 12:19:15.802009    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:19:15.802038    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:19:15.805931    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf
	I0919 12:19:15.809122    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:19:15.809151    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 12:19:15.811752    4610 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:19:15.814451    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:19:15.845584    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:19:16.392248    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:19:16.737692    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:19:16.770934    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:19:16.796078    4610 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:19:16.796166    4610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:19:17.298610    4610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:19:17.797475    4610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:19:17.807295    4610 api_server.go:72] duration metric: took 1.011247s to wait for apiserver process to appear ...
	I0919 12:19:17.807305    4610 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:19:17.807314    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:22.809391    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:22.809541    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:27.810341    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:27.810454    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:32.811327    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:32.811364    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:37.812183    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:37.812203    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:42.813181    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:42.813265    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:47.815034    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:47.815077    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:52.817094    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:52.817189    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:19:57.819837    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:19:57.819935    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:02.820774    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:02.820873    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:07.822176    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:07.822262    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:12.824905    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:12.824994    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:17.827680    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:17.828247    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:20:17.867192    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:20:17.867360    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:20:17.889931    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:20:17.890090    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:20:17.908699    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:20:17.908804    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:20:17.920761    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:20:17.920848    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:20:17.931511    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:20:17.931596    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:20:17.942348    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:20:17.942433    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:20:17.952891    4610 logs.go:276] 0 containers: []
	W0919 12:20:17.952906    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:20:17.952980    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:20:17.971917    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:20:17.971935    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:20:17.971940    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:20:17.993552    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:20:17.993567    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:20:18.005845    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:20:18.005860    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:20:18.020074    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:20:18.020087    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:20:18.024390    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:20:18.024400    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:20:18.036057    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:20:18.036067    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:20:18.054067    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:20:18.054080    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:20:18.078855    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:20:18.078862    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:20:18.093768    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:20:18.093779    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:20:18.116873    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:20:18.116883    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:20:18.131461    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:20:18.131472    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:20:18.145669    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:20:18.145679    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:20:18.215728    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:20:18.215743    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:20:18.235149    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:20:18.235159    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:20:18.246979    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:20:18.246991    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:20:18.259196    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:20:18.259207    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:20:18.270707    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:20:18.270717    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
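	(A full gathering pass like the one above tails the last 400 lines of every container it found, plus the kubelet and docker/cri-docker journals, warning-level-and-up dmesg, and "kubectl describe nodes" run through the in-VM v1.24.1 kubectl with /var/lib/minikube/kubeconfig. The container-status line is a fallback chain: "which crictl || echo crictl" resolves crictl if installed, else leaves the bare word, and "|| sudo docker ps -a" degrades to Docker when crictl fails. The cycle then repeats roughly every eight seconds — 5 s probe timeout plus ~3 s of gathering — through the end of this excerpt at 12:21:45. A compact sketch of the fan-out follows; command strings are copied from the log, while the wrapper and sample IDs are illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one command line through bash, as ssh_runner.go does over SSH.
	func run(cmdline string) {
		out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Printf("--- %s ---\n%s", cmdline, out)
	}

	func main() {
		// Per-container logs; IDs here are the ones enumerated above.
		for _, id := range []string{"4e4e4a383f70", "da27d8fa2473", "02ffade1b5ef"} {
			run("docker logs --tail 400 " + id)
		}
		// Unit, kernel, and node-level views, verbatim from the cycle above.
		run("sudo journalctl -u kubelet -n 400")
		run("sudo journalctl -u docker -u cri-docker -n 400")
		run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	}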
	I0919 12:20:20.807467    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:25.808193    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:25.808779    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:20:25.846002    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:20:25.846169    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:20:25.867292    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:20:25.867412    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:20:25.882557    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:20:25.882653    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:20:25.895408    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:20:25.895499    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:20:25.905892    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:20:25.905979    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:20:25.916234    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:20:25.916319    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:20:25.926795    4610 logs.go:276] 0 containers: []
	W0919 12:20:25.926808    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:20:25.926880    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:20:25.937269    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:20:25.937289    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:20:25.937296    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:20:25.948700    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:20:25.948712    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:20:25.974016    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:20:25.974026    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:20:25.986074    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:20:25.986087    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:20:26.004709    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:20:26.004720    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:20:26.025629    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:20:26.025641    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:20:26.043731    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:20:26.043746    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:20:26.060052    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:20:26.060062    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:20:26.097248    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:20:26.097256    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:20:26.110949    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:20:26.110961    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:20:26.128647    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:20:26.128657    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:20:26.139435    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:20:26.139446    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:20:26.153725    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:20:26.153734    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:20:26.171264    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:20:26.171275    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:20:26.182883    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:20:26.182897    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:20:26.187213    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:20:26.187219    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:20:26.221945    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:20:26.221954    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:20:28.739533    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:33.742194    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:33.742798    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:20:33.781344    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:20:33.781514    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:20:33.803280    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:20:33.803396    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:20:33.818185    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:20:33.818292    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:20:33.831111    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:20:33.831199    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:20:33.842249    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:20:33.842328    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:20:33.852445    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:20:33.852533    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:20:33.862097    4610 logs.go:276] 0 containers: []
	W0919 12:20:33.862106    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:20:33.862166    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:20:33.872716    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:20:33.872739    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:20:33.872743    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:20:33.891863    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:20:33.891874    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:20:33.905598    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:20:33.905608    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:20:33.920508    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:20:33.920518    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:20:33.932085    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:20:33.932095    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:20:33.957789    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:20:33.957797    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:20:33.971159    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:20:33.971173    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:20:33.983176    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:20:33.983186    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:20:34.001008    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:20:34.001020    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:20:34.012185    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:20:34.012195    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:20:34.023671    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:20:34.023679    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:20:34.057347    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:20:34.057354    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:20:34.061213    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:20:34.061220    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:20:34.096211    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:20:34.096225    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:20:34.109957    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:20:34.109969    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:20:34.120955    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:20:34.120966    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:20:34.133816    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:20:34.133832    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:20:36.647043    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:41.649794    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:41.650412    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:20:41.687723    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:20:41.687897    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:20:41.708698    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:20:41.708822    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:20:41.723344    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:20:41.723436    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:20:41.735910    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:20:41.735997    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:20:41.752002    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:20:41.752069    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:20:41.762683    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:20:41.762766    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:20:41.772782    4610 logs.go:276] 0 containers: []
	W0919 12:20:41.772799    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:20:41.772870    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:20:41.783554    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:20:41.783570    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:20:41.783574    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:20:41.787853    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:20:41.787861    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:20:41.802413    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:20:41.802424    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:20:41.821816    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:20:41.821827    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:20:41.837114    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:20:41.837125    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:20:41.848569    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:20:41.848578    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:20:41.861145    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:20:41.861154    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:20:41.879763    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:20:41.879772    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:20:41.891698    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:20:41.891712    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:20:41.903830    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:20:41.903844    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:20:41.915752    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:20:41.915762    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:20:41.940181    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:20:41.940188    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:20:41.951366    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:20:41.951376    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:20:41.985869    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:20:41.985876    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:20:42.022255    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:20:42.022267    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:20:42.037032    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:20:42.037044    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:20:42.056094    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:20:42.056105    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:20:44.575332    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:49.578051    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:49.578515    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:20:49.614279    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:20:49.614435    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:20:49.635583    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:20:49.635702    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:20:49.650282    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:20:49.650378    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:20:49.662206    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:20:49.662289    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:20:49.672847    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:20:49.672932    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:20:49.683830    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:20:49.683911    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:20:49.694055    4610 logs.go:276] 0 containers: []
	W0919 12:20:49.694071    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:20:49.694138    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:20:49.704753    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:20:49.704769    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:20:49.704774    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:20:49.718949    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:20:49.718962    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:20:49.730438    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:20:49.730451    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:20:49.746895    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:20:49.746906    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:20:49.767291    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:20:49.767302    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:20:49.785798    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:20:49.785810    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:20:49.810538    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:20:49.810545    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:20:49.815052    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:20:49.815059    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:20:49.850273    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:20:49.850287    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:20:49.869181    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:20:49.869191    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:20:49.880472    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:20:49.880484    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:20:49.892341    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:20:49.892350    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:20:49.904074    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:20:49.904085    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:20:49.916040    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:20:49.916053    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:20:49.950195    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:20:49.950202    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:20:49.963686    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:20:49.963696    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:20:49.974833    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:20:49.974844    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:20:52.492586    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:20:57.493971    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:20:57.494254    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:20:57.519958    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:20:57.520071    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:20:57.540938    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:20:57.541036    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:20:57.552029    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:20:57.552101    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:20:57.564312    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:20:57.564385    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:20:57.576880    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:20:57.576971    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:20:57.587221    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:20:57.587307    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:20:57.597418    4610 logs.go:276] 0 containers: []
	W0919 12:20:57.597432    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:20:57.597499    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:20:57.607723    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:20:57.607743    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:20:57.607748    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:20:57.618700    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:20:57.618712    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:20:57.644595    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:20:57.644606    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:20:57.656440    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:20:57.656451    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:20:57.675171    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:20:57.675183    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:20:57.687668    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:20:57.687682    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:20:57.698572    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:20:57.698583    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:20:57.710475    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:20:57.710487    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:20:57.728926    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:20:57.728940    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:20:57.733796    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:20:57.733802    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:20:57.772307    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:20:57.772322    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:20:57.787604    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:20:57.787617    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:20:57.798868    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:20:57.798880    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:20:57.835288    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:20:57.835297    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:20:57.846937    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:20:57.846948    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:20:57.861326    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:20:57.861337    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:20:57.875805    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:20:57.875813    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:00.392128    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:05.393412    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:05.393865    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:05.423574    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:05.423735    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:05.441889    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:05.441990    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:05.455357    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:05.455438    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:05.470726    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:05.470817    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:05.481130    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:05.481212    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:05.501014    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:05.501101    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:05.511970    4610 logs.go:276] 0 containers: []
	W0919 12:21:05.511983    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:05.512053    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:05.532101    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:05.532121    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:05.532126    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:05.544358    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:05.544369    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:05.567981    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:05.567987    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:21:05.601790    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:05.601798    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:05.606549    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:05.606557    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:05.618977    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:05.618986    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:05.637877    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:05.637888    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:05.652056    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:05.652067    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:05.663423    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:05.663433    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:05.677299    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:05.677309    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:05.694316    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:05.694325    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:05.705805    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:05.705814    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:05.717623    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:05.717634    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:05.732728    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:05.732736    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:05.745112    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:05.745124    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:05.779706    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:05.779720    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:05.797437    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:05.797448    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:08.309069    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:13.310608    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:13.311189    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:13.348410    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:13.348576    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:13.370506    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:13.370638    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:13.385744    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:13.385819    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:13.398095    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:13.398177    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:13.409261    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:13.409347    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:13.424092    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:13.424177    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:13.434548    4610 logs.go:276] 0 containers: []
	W0919 12:21:13.434560    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:13.434635    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:13.449723    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:13.449740    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:13.449746    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:13.461374    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:13.461388    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:13.473583    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:13.473592    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:13.511357    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:13.511365    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:13.535095    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:13.535107    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:13.547138    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:13.547147    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:13.564369    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:13.564379    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:13.578360    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:13.578371    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:13.596634    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:13.596645    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:13.608065    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:13.608075    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:21:13.642556    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:13.642564    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:13.656337    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:13.656346    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:13.670478    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:13.670489    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:13.684305    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:13.684315    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:13.689228    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:13.689237    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:13.703326    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:13.703336    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:13.718761    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:13.718772    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:16.245069    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:21.247846    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:21.248742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:21.278430    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:21.278583    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:21.305121    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:21.305212    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:21.317081    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:21.317164    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:21.327933    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:21.328001    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:21.338769    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:21.338856    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:21.349942    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:21.350026    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:21.365163    4610 logs.go:276] 0 containers: []
	W0919 12:21:21.365177    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:21.365239    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:21.376189    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:21.376205    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:21.376211    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:21.387719    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:21.387730    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:21.412693    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:21.412703    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:21.424721    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:21.424736    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:21.429246    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:21.429255    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:21.443968    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:21.443977    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:21.457106    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:21.457117    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:21.468121    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:21.468131    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:21.479097    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:21.479107    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:21.492129    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:21.492139    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:21.505745    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:21.505754    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:21:21.542146    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:21.542152    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:21.578931    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:21.578941    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:21.598055    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:21.598065    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:21.616949    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:21.616962    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:21.639076    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:21.639088    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:21.658207    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:21.658217    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:24.180131    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:29.182785    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:29.183356    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:29.220976    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:29.221135    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:29.249751    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:29.249865    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:29.263791    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:29.263868    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:29.276074    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:29.276169    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:29.286614    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:29.286703    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:29.297523    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:29.297596    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:29.307522    4610 logs.go:276] 0 containers: []
	W0919 12:21:29.307537    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:29.307612    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:29.318148    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:29.318169    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:29.318174    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:29.322889    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:29.322895    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:29.337598    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:29.337610    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:29.352876    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:29.352888    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:29.364152    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:29.364162    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:29.377056    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:29.377069    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:29.388793    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:29.388807    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:29.403361    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:29.403373    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:29.415495    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:29.415509    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:29.427883    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:29.427894    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:29.443013    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:29.443024    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:29.454418    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:29.454428    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:29.488664    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:29.488676    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:21:29.524883    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:29.524892    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:29.543181    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:29.543190    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:29.557321    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:29.557334    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:29.581279    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:29.581290    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:32.108697    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:37.110894    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:37.111472    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:37.155881    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:37.156038    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:37.175703    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:37.175820    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:37.190960    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:37.191053    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:37.203097    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:37.203187    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:37.213713    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:37.213793    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:37.230060    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:37.230144    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:37.240466    4610 logs.go:276] 0 containers: []
	W0919 12:21:37.240478    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:37.240543    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:37.251169    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:37.251188    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:37.251193    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:37.264765    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:37.264774    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:37.280151    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:37.280164    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:37.299139    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:37.299149    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:37.323252    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:37.323264    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:37.343690    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:37.343701    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:37.354606    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:37.354619    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:37.366492    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:37.366501    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:37.378548    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:37.378561    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:37.396437    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:37.396447    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:37.408448    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:37.408462    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:37.422672    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:37.422684    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:37.427340    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:37.427346    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:37.462860    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:37.462871    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:37.475422    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:37.475433    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:37.486658    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:37.486671    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:37.497969    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:37.497981    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
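
The cycle above is minikube's diagnostic loop: after each failed apiserver health probe it re-enumerates the control-plane containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and then tails their logs. A minimal Go sketch of the enumeration step, assuming only the docker CLI seen in the Run: lines (the helper name and component list are illustrative, not minikube's actual logs.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers enumerates containers whose names carry the
	// k8s_<component> prefix, the same filter the Run: lines above
	// pass to docker ps.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

Run against the same guest, this would print counts in the same shape as the logs.go:276] "N containers:" lines above.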
	I0919 12:21:40.036960    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:45.039079    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:45.039222    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:45.053483    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:45.053581    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:45.066682    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:45.066771    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:45.079280    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:45.079365    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:45.090059    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:45.090145    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:45.100880    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:45.100966    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:45.111554    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:45.111634    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:45.122102    4610 logs.go:276] 0 containers: []
	W0919 12:21:45.122120    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:45.122195    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:45.133204    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:45.133225    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:45.133230    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:21:45.172031    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:45.172052    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:45.192318    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:45.192336    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:45.217925    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:45.217937    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:45.224061    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:45.224072    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:45.239401    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:45.239412    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:45.257484    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:45.257495    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:45.269694    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:45.269703    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:45.281804    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:45.281821    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:45.295841    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:45.295856    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:45.311725    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:45.311737    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:45.326936    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:45.326944    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:45.338454    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:45.338465    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:45.372834    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:45.372849    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:45.384792    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:45.384803    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:45.396598    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:45.396607    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:45.407753    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:45.407768    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:47.932766    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:21:52.933327    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:21:52.933461    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:21:52.947170    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:21:52.947253    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:21:52.957872    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:21:52.957955    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:21:52.969381    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:21:52.969478    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:21:52.979870    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:21:52.979960    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:21:52.991006    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:21:52.991092    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:21:53.002112    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:21:53.002191    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:21:53.012888    4610 logs.go:276] 0 containers: []
	W0919 12:21:53.012905    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:21:53.012979    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:21:53.023469    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:21:53.023489    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:21:53.023494    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:21:53.042525    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:21:53.042540    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:21:53.062127    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:21:53.062139    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:21:53.078539    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:21:53.078554    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:21:53.090433    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:21:53.090443    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:21:53.102544    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:21:53.102554    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:21:53.137789    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:21:53.137799    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:21:53.153006    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:21:53.153018    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:21:53.164997    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:21:53.165007    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:21:53.178060    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:21:53.178070    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:21:53.216969    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:21:53.216982    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:21:53.231653    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:21:53.231665    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:21:53.244155    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:21:53.244169    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:21:53.267925    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:21:53.267940    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:21:53.272353    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:21:53.272365    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:21:53.287431    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:21:53.287444    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:21:53.306041    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:21:53.306060    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:21:55.833747    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:00.834192    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:00.834321    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:00.845742    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:00.845829    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:00.856965    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:00.857043    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:00.868054    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:00.868126    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:00.878868    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:00.878960    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:00.892783    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:00.892866    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:00.903502    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:00.903590    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:00.913773    4610 logs.go:276] 0 containers: []
	W0919 12:22:00.913784    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:00.913855    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:00.924968    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:00.924989    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:00.924995    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:00.939652    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:00.939662    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:00.961647    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:00.961660    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:00.974314    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:00.974325    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:01.011338    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:01.011357    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:01.022974    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:01.022985    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:01.034759    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:01.034772    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:01.059158    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:01.059169    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:01.081199    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:01.081216    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:01.118338    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:01.118351    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:01.133485    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:01.133496    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:01.149116    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:01.149129    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:01.161658    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:01.161674    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:01.179340    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:01.179352    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:01.191544    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:01.191560    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:01.202652    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:01.202663    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:01.206860    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:01.206865    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
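
Note that every "Checking apiserver healthz" line is followed exactly five seconds later by "context deadline exceeded": the HTTP client's timeout fires before the guest apiserver ever answers. A minimal sketch of such a probe, assuming a 5s client timeout and the self-signed certificate a minikube guest serves; the endpoint and timeout are taken from the log, but the code is illustrative, not minikube's api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Matches the ~5s gap between each probe and its failure line.
			Timeout: 5 * time.Second,
			// Assumption: the guest apiserver cert is self-signed, so an
			// illustrative probe skips verification.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}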
	I0919 12:22:03.718774    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:08.721408    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:08.721643    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:08.741925    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:08.742050    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:08.756646    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:08.756746    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:08.769398    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:08.769493    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:08.780487    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:08.780575    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:08.795287    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:08.795373    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:08.805864    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:08.805960    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:08.816578    4610 logs.go:276] 0 containers: []
	W0919 12:22:08.816590    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:08.816661    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:08.827140    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:08.827158    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:08.827163    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:08.838487    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:08.838498    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:08.842790    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:08.842798    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:08.856961    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:08.856972    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:08.871975    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:08.871986    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:08.893931    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:08.893942    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:08.906129    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:08.906140    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:08.924713    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:08.924723    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:08.936259    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:08.936270    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:08.960141    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:08.960149    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:08.996871    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:08.996884    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:09.008390    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:09.008401    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:09.020245    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:09.020259    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:09.056556    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:09.056566    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:09.070899    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:09.070912    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:09.087385    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:09.087398    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:09.105232    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:09.105243    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:11.619576    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:16.621804    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:16.622363    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:16.659370    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:16.659543    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:16.680691    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:16.680818    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:16.695792    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:16.695886    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:16.708709    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:16.708803    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:16.719545    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:16.719616    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:16.734802    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:16.734887    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:16.744539    4610 logs.go:276] 0 containers: []
	W0919 12:22:16.744553    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:16.744627    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:16.755697    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:16.755727    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:16.755733    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:16.767374    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:16.767383    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:16.784982    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:16.784993    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:16.796825    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:16.796836    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:16.809037    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:16.809047    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:16.844537    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:16.844546    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:16.849483    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:16.849491    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:16.862991    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:16.863003    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:16.874389    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:16.874401    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:16.899249    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:16.899256    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:16.919342    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:16.919353    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:16.932968    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:16.932979    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:16.944131    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:16.944143    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:16.956019    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:16.956033    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:16.970863    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:16.970877    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:17.026893    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:17.026911    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:17.050671    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:17.050686    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:19.567589    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:24.569736    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:24.569815    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:24.582159    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:24.582238    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:24.593014    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:24.593087    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:24.603635    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:24.603699    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:24.613837    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:24.613916    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:24.624507    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:24.624587    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:24.636039    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:24.636115    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:24.646639    4610 logs.go:276] 0 containers: []
	W0919 12:22:24.646649    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:24.646715    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:24.657186    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:24.657201    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:24.657207    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:24.693738    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:24.693749    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:24.715360    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:24.715370    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:24.738166    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:24.738175    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:24.750428    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:24.750444    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:24.764687    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:24.764697    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:24.779463    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:24.779475    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:24.792423    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:24.792438    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:24.813014    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:24.813027    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:24.826399    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:24.826415    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:24.838932    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:24.838944    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:24.850731    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:24.850741    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:24.874459    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:24.874475    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:24.887179    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:24.887189    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:24.924653    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:24.924673    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:24.929670    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:24.929678    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:24.944218    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:24.944233    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
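
The "container status" step in each cycle is a shell fallback: `which crictl || echo crictl` substitutes the crictl path when it is installed (otherwise the bare name, whose invocation then fails), and the trailing || sudo docker ps -a falls back to docker in that case. A sketch of running that same one-liner the way the ssh_runner lines do, via /bin/bash -c; the Go wrapper itself is an assumption for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The exact command string from the Run: lines above.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}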
	I0919 12:22:27.461648    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:32.464275    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:32.464492    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:32.476411    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:32.476508    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:32.487880    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:32.487976    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:32.501100    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:32.501184    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:32.511520    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:32.511604    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:32.526530    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:32.526617    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:32.537828    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:32.537910    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:32.548333    4610 logs.go:276] 0 containers: []
	W0919 12:22:32.548344    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:32.548416    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:32.560399    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:32.560419    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:32.560425    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:32.572550    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:32.572561    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:32.584085    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:32.584097    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:32.619735    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:32.619742    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:32.624143    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:32.624149    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:32.659523    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:32.659534    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:32.674432    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:32.674443    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:32.688675    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:32.688686    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:32.708054    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:32.708071    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:32.723078    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:32.723088    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:32.748140    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:32.748147    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:32.761051    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:32.761063    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:32.773927    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:32.773938    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:32.792474    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:32.792490    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:32.804788    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:32.804798    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:32.817048    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:32.817058    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:32.837129    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:32.837139    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:35.349855    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:40.352046    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:40.352506    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:40.387500    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:40.387656    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:40.410428    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:40.410544    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:40.424399    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:40.424491    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:40.437958    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:40.438047    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:40.447974    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:40.448060    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:40.458791    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:40.458878    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:40.468964    4610 logs.go:276] 0 containers: []
	W0919 12:22:40.468977    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:40.469043    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:40.480260    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:40.480286    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:40.480295    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:40.521455    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:40.521468    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:40.539489    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:40.539500    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:40.550665    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:40.550676    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:40.567393    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:40.567404    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:40.578540    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:40.578554    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:40.601156    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:40.601163    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:40.605153    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:40.605162    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:40.619448    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:40.619461    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:40.631569    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:40.631579    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:40.642190    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:40.642200    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:40.653947    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:40.653961    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:40.687834    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:40.687844    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:40.710620    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:40.710630    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:40.725957    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:40.725968    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:40.739823    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:40.739835    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:40.755897    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:40.755908    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:43.267983    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:48.268509    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:48.268614    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:48.281339    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:48.281432    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:48.293287    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:48.293377    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:48.305210    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:48.305301    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:48.319754    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:48.319851    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:48.338091    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:48.338181    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:48.354297    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:48.354385    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:48.366180    4610 logs.go:276] 0 containers: []
	W0919 12:22:48.366191    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:48.366270    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:48.377702    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:48.377720    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:48.377726    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:48.393270    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:48.393287    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:48.409206    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:48.409222    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:48.426373    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:48.426386    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:48.450948    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:48.450966    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:48.463501    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:48.463513    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:48.478219    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:48.478231    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:48.516815    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:48.516834    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:48.557664    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:48.557677    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:48.573108    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:48.573119    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:48.588613    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:48.588626    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:48.601879    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:48.601891    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:48.614348    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:48.614365    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:48.619137    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:48.619152    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:48.632593    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:48.632606    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:48.645740    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:48.645751    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:48.665077    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:48.665091    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
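
Each gathering pass draws on two log sources: per-container output via docker logs --tail 400 <id>, and systemd units (kubelet, docker, cri-docker) via journalctl -n 400. A minimal sketch of both calls; the helper names are hypothetical, and the container ID in main is the apiserver ID quoted from the log above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainer mirrors the `docker logs --tail 400 <id>` lines above.
	func tailContainer(id string) ([]byte, error) {
		return exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	}

	// tailUnits mirrors the `sudo journalctl -u docker -u cri-docker -n 400`
	// lines, accepting any number of systemd units.
	func tailUnits(units ...string) ([]byte, error) {
		args := []string{"journalctl", "-n", "400"}
		for _, u := range units {
			args = append(args, "-u", u)
		}
		return exec.Command("sudo", args...).CombinedOutput()
	}

	func main() {
		if out, err := tailUnits("docker", "cri-docker"); err == nil {
			fmt.Print(string(out))
		}
		if out, err := tailContainer("4e4e4a383f70"); err == nil {
			fmt.Print(string(out))
		}
	}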
	I0919 12:22:51.187031    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:56.189159    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:56.189643    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:56.227892    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:56.228034    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:56.246239    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:56.246360    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:56.259937    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:56.260032    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:56.274336    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:56.274422    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:56.284773    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:56.284853    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:56.295651    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:56.295731    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:56.306339    4610 logs.go:276] 0 containers: []
	W0919 12:22:56.306351    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:56.306420    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:56.317872    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:56.317890    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:56.317895    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:56.333231    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:56.333242    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:56.347971    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:56.347983    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:56.366321    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:56.366333    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:56.380057    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:56.380072    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:56.393981    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:56.393992    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:56.416193    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:56.416200    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:56.450392    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:56.450401    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:56.454672    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:56.454678    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:56.468131    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:56.468141    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:56.485645    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:56.485655    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:56.499085    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:56.499098    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:56.537208    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:56.537219    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:56.556336    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:56.556353    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:56.571651    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:56.571665    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:56.588900    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:56.588911    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:56.600755    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:56.600768    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:59.119030    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:04.121264    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:04.121662    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:23:04.149481    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:23:04.149623    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:23:04.169157    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:23:04.169266    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:23:04.183572    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:23:04.183675    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:23:04.195295    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:23:04.195383    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:23:04.207235    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:23:04.207317    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:23:04.218773    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:23:04.218867    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:23:04.228980    4610 logs.go:276] 0 containers: []
	W0919 12:23:04.228994    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:23:04.229062    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:23:04.239578    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:23:04.239594    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:23:04.239599    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:23:04.251490    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:23:04.251502    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:23:04.263633    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:23:04.263648    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:23:04.268256    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:23:04.268263    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:23:04.280478    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:23:04.280488    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:23:04.296382    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:23:04.296395    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:23:04.308435    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:23:04.308447    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:23:04.330479    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:23:04.330488    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:23:04.365178    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:23:04.365193    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:23:04.377325    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:23:04.377336    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:23:04.401508    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:23:04.401523    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:23:04.416526    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:23:04.416538    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:23:04.452170    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:23:04.452178    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:23:04.468463    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:23:04.468474    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:23:04.488007    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:23:04.488022    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:23:04.504602    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:23:04.504612    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:23:04.518943    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:23:04.518953    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
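
The api_server.go:253/269 pairs above show the probe shape: each healthz check gets a hard ~5s client timeout, and on "context deadline exceeded" minikube falls back to gathering component logs before probing again. A minimal Go sketch of a single probe, assuming the endpoint and timeout seen in this log (the helper name and TLS handling are illustrative, not minikube's actual code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver /healthz endpoint with a
    // hard client timeout, mirroring the "Checking apiserver healthz" /
    // "stopped: ... Client.Timeout exceeded" pair in the log above.
    // Hypothetical helper; TLS verification is skipped only because this sketch
    // has no access to the cluster CA.
    func probeHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: timeout,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded while awaiting headers
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
            fmt.Println("stopped:", err) // the log's api_server.go:269 branch
            return
        }
        fmt.Println("apiserver healthy")
    }
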
	I0919 12:23:07.032407    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:12.034071    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:12.034181    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:23:12.045295    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:23:12.045380    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:23:12.057309    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:23:12.057398    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:23:12.068129    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:23:12.068212    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:23:12.079446    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:23:12.079530    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:23:12.090971    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:23:12.091047    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:23:12.102997    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:23:12.103082    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:23:12.114863    4610 logs.go:276] 0 containers: []
	W0919 12:23:12.114876    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:23:12.114947    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:23:12.126339    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:23:12.126361    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:23:12.126367    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:23:12.131186    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:23:12.131198    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:23:12.151191    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:23:12.151206    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:23:12.164574    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:23:12.164587    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:23:12.182859    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:23:12.182873    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:23:12.195707    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:23:12.195720    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:23:12.212994    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:23:12.213010    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:23:12.229528    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:23:12.229542    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:23:12.250766    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:23:12.250777    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:23:12.273295    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:23:12.273308    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:23:12.297065    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:23:12.297082    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:23:12.310562    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:23:12.310576    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:23:12.348292    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:23:12.348311    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:23:12.360307    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:23:12.360320    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:23:12.384568    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:23:12.384581    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:23:12.418833    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:23:12.418845    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:23:12.431783    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:23:12.431794    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
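
Each gathering cycle begins by resolving container IDs per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, as in the logs.go:276 lines above (kubeadm-managed containers are named k8s_<component>_...). A sketch of that discovery step, run locally here rather than through minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name matches
    // the kubeadm naming convention k8s_<component>. Sketch only; minikube
    // executes the same command on the guest via its ssh_runner.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            if len(ids) == 0 {
                // Matches the log's warning for "kindnet" on this profile.
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
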
	I0919 12:23:14.956732    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:19.959318    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:19.959481    4610 kubeadm.go:597] duration metric: took 4m4.313125208s to restartPrimaryControlPlane
	W0919 12:23:19.959607    4610 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0919 12:23:19.959650    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0919 12:23:20.991296    4610 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031659875s)
	I0919 12:23:20.991366    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 12:23:20.996422    4610 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:23:20.999376    4610 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:23:21.002012    4610 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 12:23:21.002018    4610 kubeadm.go:157] found existing configuration files:
	
	I0919 12:23:21.002046    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf
	I0919 12:23:21.004421    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 12:23:21.004447    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:23:21.007064    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf
	I0919 12:23:21.009632    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 12:23:21.009664    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:23:21.012263    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf
	I0919 12:23:21.015220    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 12:23:21.015246    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:23:21.018530    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf
	I0919 12:23:21.021038    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 12:23:21.021062    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
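
The kubeadm.go:163 checks above enforce a simple invariant: every kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint (https://control-plane.minikube.internal:50300 in this run), and any file that fails the grep (exit status 2 here, since the files no longer exist after the reset) is removed so kubeadm init can regenerate it. The same grep-then-rm logic in native Go (a sketch; minikube shells out instead):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that is missing or does not
    // mention the expected control-plane endpoint, mirroring the log's
    // "grep https://... <conf>" followed by "rm -f <conf>".
    func cleanStaleKubeconfigs(endpoint string) {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
                os.Remove(conf) // rm -f semantics: ignore the error if already gone
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:50300")
    }
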
	I0919 12:23:21.023690    4610 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 12:23:21.043008    4610 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0919 12:23:21.043046    4610 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 12:23:21.092691    4610 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 12:23:21.092756    4610 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 12:23:21.092815    4610 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 12:23:21.145074    4610 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 12:23:21.149214    4610 out.go:235]   - Generating certificates and keys ...
	I0919 12:23:21.149247    4610 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 12:23:21.149278    4610 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 12:23:21.149316    4610 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 12:23:21.149346    4610 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0919 12:23:21.149379    4610 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 12:23:21.149410    4610 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0919 12:23:21.149467    4610 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0919 12:23:21.149607    4610 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0919 12:23:21.149644    4610 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 12:23:21.149679    4610 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 12:23:21.149698    4610 kubeadm.go:310] [certs] Using the existing "sa" key
	I0919 12:23:21.149728    4610 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 12:23:21.337170    4610 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 12:23:21.419296    4610 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 12:23:21.543500    4610 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 12:23:21.836183    4610 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 12:23:21.863841    4610 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 12:23:21.864277    4610 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 12:23:21.864368    4610 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 12:23:21.941299    4610 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 12:23:21.945523    4610 out.go:235]   - Booting up control plane ...
	I0919 12:23:21.945573    4610 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 12:23:21.945633    4610 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 12:23:21.945669    4610 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 12:23:21.945715    4610 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 12:23:21.945803    4610 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 12:23:26.452232    4610 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.507557 seconds
	I0919 12:23:26.452324    4610 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 12:23:26.457957    4610 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 12:23:26.973794    4610 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 12:23:26.973958    4610 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-356000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 12:23:27.474409    4610 kubeadm.go:310] [bootstrap-token] Using token: p5d3qd.ufb657tusl8cqnx2
	I0919 12:23:27.478152    4610 out.go:235]   - Configuring RBAC rules ...
	I0919 12:23:27.478211    4610 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 12:23:27.478253    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 12:23:27.481927    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 12:23:27.482833    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 12:23:27.483649    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 12:23:27.484376    4610 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 12:23:27.487756    4610 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 12:23:27.662664    4610 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 12:23:27.878103    4610 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 12:23:27.878469    4610 kubeadm.go:310] 
	I0919 12:23:27.878501    4610 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 12:23:27.878505    4610 kubeadm.go:310] 
	I0919 12:23:27.878548    4610 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 12:23:27.878553    4610 kubeadm.go:310] 
	I0919 12:23:27.878571    4610 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 12:23:27.878606    4610 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 12:23:27.878635    4610 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 12:23:27.878639    4610 kubeadm.go:310] 
	I0919 12:23:27.878672    4610 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 12:23:27.878677    4610 kubeadm.go:310] 
	I0919 12:23:27.878718    4610 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 12:23:27.878724    4610 kubeadm.go:310] 
	I0919 12:23:27.878757    4610 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 12:23:27.878808    4610 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 12:23:27.878851    4610 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 12:23:27.878854    4610 kubeadm.go:310] 
	I0919 12:23:27.878907    4610 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 12:23:27.878951    4610 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 12:23:27.878955    4610 kubeadm.go:310] 
	I0919 12:23:27.879000    4610 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p5d3qd.ufb657tusl8cqnx2 \
	I0919 12:23:27.879063    4610 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 \
	I0919 12:23:27.879074    4610 kubeadm.go:310] 	--control-plane 
	I0919 12:23:27.879077    4610 kubeadm.go:310] 
	I0919 12:23:27.879130    4610 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 12:23:27.879135    4610 kubeadm.go:310] 
	I0919 12:23:27.879197    4610 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p5d3qd.ufb657tusl8cqnx2 \
	I0919 12:23:27.879246    4610 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 
	I0919 12:23:27.879337    4610 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 12:23:27.879346    4610 cni.go:84] Creating CNI manager for ""
	I0919 12:23:27.879356    4610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:23:27.885735    4610 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 12:23:27.888842    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 12:23:27.892407    4610 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
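
With the "qemu2" driver and "docker" runtime on Kubernetes v1.24+, minikube picks the bridge CNI and writes a 496-byte /etc/cni/net.d/1-k8s.conflist (lines above). The log does not show the file's contents; the sketch below writes a representative bridge conflist built from standard CNI spec fields only, so the exact JSON is an assumption, not minikube's actual file:

    package main

    import "os"

    // bridgeConflist is an illustrative stand-in for minikube's 1-k8s.conflist;
    // the real file is 496 bytes and may differ in names and subnet.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Requires root, like the "sudo mkdir -p /etc/cni/net.d" in the log.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
            []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
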
	I0919 12:23:27.897662    4610 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 12:23:27.897721    4610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 12:23:27.897774    4610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-356000 minikube.k8s.io/updated_at=2024_09_19T12_23_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=running-upgrade-356000 minikube.k8s.io/primary=true
	I0919 12:23:27.942459    4610 ops.go:34] apiserver oom_adj: -16
	I0919 12:23:27.942475    4610 kubeadm.go:1113] duration metric: took 44.809ms to wait for elevateKubeSystemPrivileges
	I0919 12:23:27.942487    4610 kubeadm.go:394] duration metric: took 4m12.310354709s to StartCluster
	I0919 12:23:27.942496    4610 settings.go:142] acquiring lock: {Name:mk40c96dc3647741b89517369d27068bccfc0e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:23:27.942600    4610 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:23:27.942995    4610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:23:27.943228    4610 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:23:27.943249    4610 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 12:23:27.943306    4610 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-356000"
	I0919 12:23:27.943313    4610 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-356000"
	W0919 12:23:27.943316    4610 addons.go:243] addon storage-provisioner should already be in state true
	I0919 12:23:27.943327    4610 host.go:66] Checking if "running-upgrade-356000" exists ...
	I0919 12:23:27.943329    4610 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-356000"
	I0919 12:23:27.943363    4610 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-356000"
	I0919 12:23:27.943340    4610 config.go:182] Loaded profile config "running-upgrade-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:23:27.943591    4610 retry.go:31] will retry after 690.213829ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/monitor: connect: connection refused
	I0919 12:23:27.944313    4610 kapi.go:59] client config for running-upgrade-356000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a5800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:23:27.944434    4610 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-356000"
	W0919 12:23:27.944439    4610 addons.go:243] addon default-storageclass should already be in state true
	I0919 12:23:27.944447    4610 host.go:66] Checking if "running-upgrade-356000" exists ...
	I0919 12:23:27.944966    4610 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 12:23:27.944972    4610 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 12:23:27.944977    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:23:27.946829    4610 out.go:177] * Verifying Kubernetes components...
	I0919 12:23:27.953769    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:23:28.035740    4610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:23:28.041291    4610 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:23:28.041346    4610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:23:28.045424    4610 api_server.go:72] duration metric: took 102.188208ms to wait for apiserver process to appear ...
	I0919 12:23:28.045432    4610 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:23:28.045439    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:28.098633    4610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 12:23:28.396943    4610 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 12:23:28.396957    4610 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 12:23:28.638899    4610 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:23:28.643198    4610 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 12:23:28.643206    4610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 12:23:28.643214    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:23:28.682350    4610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
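
The ssh_runner.go:362 "scp memory --> <path> (<n> bytes)" lines copy an in-memory asset straight onto the guest over the SSH connection described by the sshutil.go:53 lines (user docker, localhost:50268, the profile's id_rsa). A hedged equivalent using golang.org/x/crypto/ssh; the tee-based transfer is illustrative, as minikube ships its own scp implementation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyMemory streams bytes to a remote path by piping them into sudo tee,
    // the same effect as the log's "scp memory --> /etc/kubernetes/addons/...".
    func copyMemory(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        stdin, err := sess.StdinPipe()
        if err != nil {
            return err
        }
        if err := sess.Start(fmt.Sprintf("sudo tee %s >/dev/null", dst)); err != nil {
            return err
        }
        if _, err := stdin.Write(data); err != nil {
            return err
        }
        stdin.Close()
        return sess.Wait()
    }

    func main() {
        // Connection details taken from the sshutil.go:53 line above.
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "localhost:50268", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        manifest := []byte("# storage-provisioner.yaml contents would go here\n")
        if err := copyMemory(client, manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            panic(err)
        }
    }
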
	I0919 12:23:33.047119    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:33.047168    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:38.047383    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:38.047424    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:43.047691    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:43.047715    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:48.047945    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:48.047984    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:53.048361    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:53.048393    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:58.048849    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:58.048886    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0919 12:23:58.398387    4610 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0919 12:23:58.402661    4610 out.go:177] * Enabled addons: storage-provisioner
	I0919 12:23:58.410419    4610 addons.go:510] duration metric: took 30.46801025s for enable addons: enabled=[storage-provisioner]
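
From here the probe keeps failing on a roughly five-second cadence until the overall budget from start.go:235 above ("Will wait 6m0s for node") runs out; the retry.go:31 line earlier shows the same pattern with backoff for the machine monitor socket. A compact sketch of that wait-until-deadline loop (interval and budget from the log; the condition is a stand-in for the real healthz probe):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls condition every interval until it succeeds or the budget
    // elapses, the shape of minikube's 6m0s node wait. Sketch only.
    func waitFor(interval, budget time.Duration, condition func() error) error {
        deadline := time.Now().Add(budget)
        for {
            err := condition()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s: %w", budget, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // ~5s interval and a 6m budget in the real run; shortened here so the
        // example terminates quickly.
        err := waitFor(1*time.Second, 5*time.Second, func() error {
            return errors.New("context deadline exceeded (Client.Timeout exceeded while awaiting headers)")
        })
        fmt.Println(err)
    }
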
	I0919 12:24:03.046346    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:03.046396    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:08.043737    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:08.043773    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:13.042444    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:13.042488    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:18.042250    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:18.042309    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:23.043142    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:23.043193    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:28.044430    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:28.044653    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:28.061154    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:28.061256    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:28.081802    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:28.081896    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:28.103541    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:28.103620    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:28.116647    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:28.116734    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:28.127147    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:28.127239    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:28.138154    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:28.138232    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:28.149266    4610 logs.go:276] 0 containers: []
	W0919 12:24:28.149278    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:28.149350    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:28.159718    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:28.159735    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:28.159740    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:28.193549    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:28.193557    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:28.207506    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:28.207516    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:28.219731    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:28.219740    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:28.234529    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:28.234544    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:28.253386    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:28.253400    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:28.278730    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:28.278738    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:28.290206    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:28.290217    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:28.295344    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:28.295350    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:28.331650    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:28.331660    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:28.345822    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:28.345832    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:28.357688    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:28.357697    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:28.369457    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:28.369469    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:30.881390    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:35.882956    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:35.883169    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:35.898822    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:35.898918    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:35.911135    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:35.911226    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:35.921925    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:35.922002    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:35.933393    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:35.933481    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:35.943651    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:35.943726    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:35.953658    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:35.953742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:35.963909    4610 logs.go:276] 0 containers: []
	W0919 12:24:35.963919    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:35.963988    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:35.974619    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:35.974635    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:35.974640    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:35.987162    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:35.987172    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:35.991940    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:35.991946    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:36.003647    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:36.003657    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:36.018746    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:36.018758    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:36.030353    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:36.030365    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:36.047686    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:36.047697    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:36.072407    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:36.072417    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:36.107702    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:36.107713    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:36.142757    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:36.142768    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:36.156546    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:36.156560    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:36.171243    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:36.171253    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:36.183275    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:36.183287    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:38.697156    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:43.698941    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:43.699164    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:43.724075    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:43.724195    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:43.740927    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:43.741032    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:43.755996    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:43.756096    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:43.769896    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:43.769971    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:43.780285    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:43.780357    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:43.790505    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:43.790580    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:43.801017    4610 logs.go:276] 0 containers: []
	W0919 12:24:43.801029    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:43.801101    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:43.811246    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:43.811261    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:43.811267    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:43.845762    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:43.845774    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:43.850052    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:43.850058    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:43.863808    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:43.863820    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:43.877873    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:43.877883    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:43.889305    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:43.889315    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:43.900579    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:43.900588    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:43.915149    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:43.915157    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:43.927199    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:43.927209    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:43.945235    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:43.945245    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:43.957529    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:43.957541    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:43.991962    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:43.991973    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:44.003381    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:44.003394    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:46.528702    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:51.529307    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:51.529559    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:51.547165    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:51.547261    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:51.560376    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:51.560470    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:51.572157    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:51.572230    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:51.583029    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:51.583097    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:51.593530    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:51.593614    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:51.604021    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:51.604111    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:51.614580    4610 logs.go:276] 0 containers: []
	W0919 12:24:51.614592    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:51.614665    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:51.628902    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:51.628916    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:51.628922    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:51.640493    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:51.640503    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:51.645312    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:51.645322    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:51.682188    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:51.682198    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:51.696209    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:51.696221    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:51.710554    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:51.710566    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:51.725384    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:51.725394    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:51.737386    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:51.737398    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:51.754231    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:51.754246    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:51.779380    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:51.779394    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:51.814864    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:51.814874    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:51.831512    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:51.831528    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:51.849125    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:51.849136    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:54.362150    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:59.364173    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:59.364336    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:59.377024    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:59.377113    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:59.387846    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:59.387940    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:59.398745    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:59.398834    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:59.409431    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:59.409520    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:59.419695    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:59.419784    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:59.431322    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:59.431402    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:59.441851    4610 logs.go:276] 0 containers: []
	W0919 12:24:59.441862    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:59.441931    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:59.452074    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:59.452090    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:59.452096    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:59.486971    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:59.486979    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:59.503900    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:59.503910    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:59.515096    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:59.515107    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:59.530244    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:59.530258    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:59.542109    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:59.542123    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:59.559216    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:59.559226    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:59.585876    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:59.585889    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:59.596933    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:59.596948    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:59.601552    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:59.601561    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:59.639402    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:59.639415    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:59.653565    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:59.653578    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:59.665159    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:59.665169    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:02.179072    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:07.181454    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:07.181772    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:07.213116    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:07.213259    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:07.230793    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:07.230903    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:07.243889    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:07.243978    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:07.254960    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:07.255047    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:07.265619    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:07.265704    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:07.275952    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:07.276037    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:07.286526    4610 logs.go:276] 0 containers: []
	W0919 12:25:07.286536    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:07.286609    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:07.297369    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:07.297384    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:07.297389    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:07.310178    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:07.310188    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:07.322019    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:07.322030    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:07.336397    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:07.336407    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:07.349115    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:07.349125    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:07.373359    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:07.373372    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:07.408521    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:07.408532    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:07.444210    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:07.444222    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:07.458715    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:07.458725    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:07.471332    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:07.471341    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:07.495911    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:07.495918    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:07.506938    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:07.506948    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:07.511961    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:07.511968    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:10.028341    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:15.028780    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:15.029059    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:15.049741    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:15.049856    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:15.064599    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:15.064692    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:15.077101    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:15.077189    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:15.088445    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:15.088530    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:15.099151    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:15.099234    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:15.109670    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:15.109756    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:15.119485    4610 logs.go:276] 0 containers: []
	W0919 12:25:15.119496    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:15.119559    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:15.129777    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:15.129793    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:15.129799    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:15.134744    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:15.134752    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:15.148775    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:15.148786    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:15.163376    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:15.163387    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:15.175036    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:15.175046    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:15.189927    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:15.189937    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:15.202192    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:15.202202    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:15.217455    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:15.217468    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:15.251661    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:15.251676    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:15.266051    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:15.266062    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:15.283426    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:15.283437    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:15.307657    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:15.307668    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:15.319559    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:15.319572    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:17.857003    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:22.859166    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:22.859338    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:22.870898    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:22.870987    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:22.881999    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:22.882080    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:22.896811    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:22.896891    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:22.908008    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:22.908094    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:22.918661    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:22.918742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:22.930297    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:22.930371    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:22.940203    4610 logs.go:276] 0 containers: []
	W0919 12:25:22.940215    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:22.940287    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:22.950742    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:22.950758    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:22.950763    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:22.965615    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:22.965625    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:22.977359    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:22.977369    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:22.998646    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:22.998654    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:23.011474    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:23.011485    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:23.036034    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:23.036049    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:23.068986    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:23.068994    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:23.073144    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:23.073150    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:23.087019    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:23.087033    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:23.099119    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:23.099136    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:23.110906    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:23.110918    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:23.145915    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:23.145925    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:23.160811    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:23.160821    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:25.679759    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:30.680131    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:30.680240    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:30.691193    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:30.691272    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:30.701887    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:30.701976    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:30.712359    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:30.712440    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:30.723149    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:30.723233    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:30.734250    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:30.734333    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:30.744956    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:30.745037    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:30.755349    4610 logs.go:276] 0 containers: []
	W0919 12:25:30.755363    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:30.755434    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:30.765861    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:30.765876    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:30.765882    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:30.801278    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:30.801288    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:30.816718    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:30.816729    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:30.829611    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:30.829626    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:30.841217    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:30.841230    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:30.858409    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:30.858421    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:30.869455    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:30.869465    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:30.893328    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:30.893337    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:30.897685    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:30.897691    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:30.909397    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:30.909412    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:30.923779    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:30.923792    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:30.935339    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:30.935350    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:30.950138    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:30.950147    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:33.485693    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:38.487733    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:38.487845    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:38.499647    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:38.499737    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:38.510775    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:38.510860    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:38.523205    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:38.523289    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:38.534647    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:38.534735    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:38.546539    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:38.546633    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:38.558929    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:38.559010    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:38.569516    4610 logs.go:276] 0 containers: []
	W0919 12:25:38.569530    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:38.569602    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:38.580240    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:38.580254    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:38.580260    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:38.615519    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:38.615531    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:38.627837    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:38.627851    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:38.643396    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:38.643407    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:38.654784    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:38.654795    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:38.665970    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:38.665981    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:38.684028    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:38.684039    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:38.707247    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:38.707254    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:38.741064    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:38.741078    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:38.745457    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:38.745466    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:38.759614    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:38.759628    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:38.773597    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:38.773612    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:38.785656    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:38.785671    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:41.299152    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:46.299322    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:46.299428    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:46.311664    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:46.311754    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:46.324123    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:46.324212    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:46.338985    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:25:46.339077    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:46.350048    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:46.350136    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:46.362098    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:46.362179    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:46.373613    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:46.373704    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:46.385056    4610 logs.go:276] 0 containers: []
	W0919 12:25:46.385069    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:46.385147    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:46.396368    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:46.396387    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:46.396393    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:46.433730    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:46.433742    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:46.449728    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:25:46.449739    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:25:46.461751    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:25:46.461760    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:25:46.474151    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:46.474163    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:46.491450    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:46.491463    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:46.502966    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:46.502981    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:46.536334    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:46.536344    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:46.558456    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:46.558471    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:46.570512    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:46.570522    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:46.583297    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:46.583308    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:46.595115    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:46.595128    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:46.607072    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:46.607087    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:46.632042    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:46.632049    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:46.636367    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:46.636377    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:49.153111    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:54.155225    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:54.155327    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:54.167217    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:54.167305    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:54.178643    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:54.178719    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:54.189702    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:25:54.189812    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:54.204540    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:54.204622    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:54.215611    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:54.215691    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:54.227643    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:54.227728    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:54.243627    4610 logs.go:276] 0 containers: []
	W0919 12:25:54.243639    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:54.243719    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:54.254593    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:54.254610    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:54.254616    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:54.259489    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:54.259498    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:54.296669    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:54.296682    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:54.311753    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:54.311762    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:54.324769    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:54.324781    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:54.341279    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:54.341288    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:54.358000    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:54.358012    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:54.370641    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:54.370651    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:54.407920    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:25:54.407937    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:25:54.419458    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:54.419469    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:54.443300    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:54.443317    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:54.461260    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:54.461271    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:54.475573    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:25:54.475584    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:25:54.487666    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:54.487682    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:54.502407    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:54.502420    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:57.016433    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:02.018821    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:02.018923    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:02.031007    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:02.031091    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:02.042439    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:02.042524    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:02.054114    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:02.054200    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:02.071586    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:02.071668    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:02.083360    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:02.083439    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:02.094233    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:02.094317    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:02.105272    4610 logs.go:276] 0 containers: []
	W0919 12:26:02.105287    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:02.105360    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:02.116433    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:02.116451    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:02.116456    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:02.132109    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:02.132120    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:02.144626    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:02.144638    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:02.158765    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:02.158778    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:02.171654    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:02.171668    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:02.206945    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:02.206961    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:02.251157    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:02.251168    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:02.278278    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:02.278294    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:02.283587    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:02.283598    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:02.296211    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:02.296224    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:02.314614    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:02.314625    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:02.328382    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:02.328396    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:02.342512    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:02.342525    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:02.359239    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:02.359251    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:02.374595    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:02.374606    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:04.887973    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:09.890020    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:09.890136    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:09.901702    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:09.901794    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:09.913678    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:09.913762    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:09.926649    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:09.926742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:09.938864    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:09.938946    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:09.950371    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:09.950453    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:09.962197    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:09.962283    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:09.973952    4610 logs.go:276] 0 containers: []
	W0919 12:26:09.973965    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:09.974042    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:09.985969    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:09.985987    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:09.985993    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:09.990878    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:09.990887    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:10.003709    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:10.003721    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:10.016157    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:10.016172    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:10.043711    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:10.043727    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:10.086422    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:10.086436    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:10.112698    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:10.112714    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:10.133149    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:10.133161    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:10.145914    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:10.145927    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:10.161699    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:10.161715    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:10.178147    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:10.178160    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:10.191069    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:10.191080    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:10.203029    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:10.203040    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:10.240607    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:10.240629    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:10.253207    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:10.253218    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:12.767702    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:17.769887    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:17.770120    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:17.790994    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:17.791108    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:17.805471    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:17.805566    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:17.831761    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:17.831858    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:17.844708    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:17.844779    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:17.856619    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:17.856697    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:17.868018    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:17.868099    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:17.879576    4610 logs.go:276] 0 containers: []
	W0919 12:26:17.879587    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:17.879655    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:17.891566    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:17.891584    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:17.891590    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:17.926764    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:17.926777    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:17.941507    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:17.941517    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:17.961247    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:17.961257    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:17.975042    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:17.975055    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:17.990986    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:17.990998    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:18.016624    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:18.016633    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:18.030112    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:18.030123    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:18.068258    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:18.068270    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:18.080504    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:18.080514    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:18.093321    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:18.093332    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:18.106041    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:18.106056    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:18.110844    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:18.110858    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:18.126677    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:18.126693    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:18.139073    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:18.139088    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:20.654728    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:25.657203    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:25.657647    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:25.690090    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:25.690251    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:25.708615    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:25.708734    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:25.722575    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:25.722680    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:25.734906    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:25.734998    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:25.747607    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:25.747697    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:25.759844    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:25.759931    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:25.772152    4610 logs.go:276] 0 containers: []
	W0919 12:26:25.772164    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:25.772244    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:25.783799    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:25.783816    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:25.783822    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:25.802045    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:25.802057    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:25.819297    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:25.819310    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:25.846630    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:25.846644    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:25.861458    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:25.861467    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:25.866827    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:25.866836    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:25.902836    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:25.902846    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:25.915279    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:25.915291    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:25.931538    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:25.931555    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:25.944949    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:25.944960    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:25.958346    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:25.958358    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:25.981939    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:25.981947    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:25.994919    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:25.994929    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:26.010004    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:26.010016    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:26.047802    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:26.047823    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:28.570350    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:33.572966    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:33.573603    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:33.615346    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:33.615506    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:33.638344    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:33.638468    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:33.657680    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:33.657777    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:33.676902    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:33.676991    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:33.688745    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:33.688832    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:33.700967    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:33.701057    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:33.712379    4610 logs.go:276] 0 containers: []
	W0919 12:26:33.712393    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:33.712465    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:33.724608    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:33.724628    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:33.724634    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:33.764497    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:33.764509    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:33.777206    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:33.777219    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:33.793615    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:33.793628    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:33.806500    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:33.806512    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:33.822202    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:33.822216    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:33.838540    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:33.838549    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:33.865193    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:33.865207    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:33.885223    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:33.885234    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:33.891978    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:33.891989    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:33.904195    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:33.904205    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:33.916633    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:33.916644    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:33.929371    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:33.929384    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:33.965249    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:33.965262    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:33.979102    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:33.979114    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:36.500022    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:41.501206    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:41.501487    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:41.528003    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:41.528123    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:41.542838    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:41.542919    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:41.555720    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:41.555814    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:41.566527    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:41.566615    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:41.576943    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:41.576986    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:41.588520    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:41.588568    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:41.599935    4610 logs.go:276] 0 containers: []
	W0919 12:26:41.599949    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:41.600028    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:41.611496    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:41.611516    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:41.611522    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:41.647681    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:41.647703    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:41.673524    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:41.673536    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:41.689947    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:41.689961    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:41.704194    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:41.704207    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:41.718384    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:41.718396    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:41.735004    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:41.735017    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:41.748784    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:41.748797    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:41.762176    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:41.762187    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:41.783324    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:41.783340    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:41.796622    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:41.796633    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:41.801938    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:41.801947    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:41.841290    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:41.841301    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:41.856418    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:41.856428    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:41.873469    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:41.873481    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:44.399448    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:49.401633    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:49.401939    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:49.429286    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:49.429439    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:49.447099    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:49.447219    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:49.460676    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:49.460769    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:49.471835    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:49.471919    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:49.483069    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:49.483139    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:49.493830    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:49.493913    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:49.503906    4610 logs.go:276] 0 containers: []
	W0919 12:26:49.503920    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:49.503993    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:49.514188    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:49.514207    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:49.514213    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:49.527418    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:49.527429    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:49.539834    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:49.539846    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:49.552747    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:49.552759    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:49.572777    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:49.572786    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:49.599709    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:49.599723    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:49.604847    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:49.604857    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:49.643106    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:49.643119    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:49.658814    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:49.658824    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:49.695656    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:49.695675    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:49.710593    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:49.710614    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:49.723170    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:49.723182    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:49.736441    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:49.736452    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:49.750272    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:49.750284    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:49.763334    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:49.763346    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:52.284428    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:57.285985    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:57.286267    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:57.306475    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:57.306601    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:57.321713    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:57.321801    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:57.334270    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:57.334363    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:57.344994    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:57.345088    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:57.355613    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:57.355698    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:57.366115    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:57.366200    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:57.376402    4610 logs.go:276] 0 containers: []
	W0919 12:26:57.376413    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:57.376485    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:57.387165    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:57.387181    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:57.387186    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:57.402273    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:57.402284    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:57.435445    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:57.435455    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:57.447088    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:57.447100    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:57.459999    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:57.460011    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:57.472702    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:57.472715    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:57.488283    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:57.488295    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:57.501569    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:57.501584    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:57.514179    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:57.514189    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:57.551769    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:57.551785    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:57.566865    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:57.566875    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:57.582414    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:57.582425    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:57.599714    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:57.599733    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:57.626239    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:57.626255    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:57.631422    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:57.631434    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:00.149610    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:05.150724    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:05.150856    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:27:05.161993    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:27:05.162086    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:27:05.176310    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:27:05.176402    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:27:05.187191    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:27:05.187274    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:27:05.198287    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:27:05.198377    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:27:05.208566    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:27:05.208641    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:27:05.219272    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:27:05.219357    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:27:05.230055    4610 logs.go:276] 0 containers: []
	W0919 12:27:05.230070    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:27:05.230146    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:27:05.240730    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:27:05.240750    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:27:05.240755    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:27:05.252906    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:27:05.252919    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:27:05.264484    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:27:05.264494    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:27:05.281178    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:27:05.281190    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:27:05.293977    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:27:05.293989    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:27:05.312556    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:27:05.312567    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:27:05.317265    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:27:05.317273    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:05.331853    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:27:05.331862    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:27:05.344316    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:27:05.344328    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:27:05.360149    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:27:05.360160    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:27:05.396734    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:27:05.396746    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:27:05.408745    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:27:05.408756    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:27:05.432033    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:27:05.432040    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:27:05.465658    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:27:05.465680    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:27:05.480670    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:27:05.480685    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:27:07.996607    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:12.998702    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:12.998908    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:27:13.011272    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:27:13.011367    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:27:13.021857    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:27:13.021937    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:27:13.033494    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:27:13.033585    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:27:13.044096    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:27:13.044179    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:27:13.055052    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:27:13.055142    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:27:13.065208    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:27:13.065304    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:27:13.075777    4610 logs.go:276] 0 containers: []
	W0919 12:27:13.075789    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:27:13.075868    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:27:13.086299    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:27:13.086321    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:27:13.086326    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:27:13.119624    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:27:13.119632    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:27:13.131516    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:27:13.131527    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:27:13.143181    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:27:13.143192    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:27:13.156262    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:27:13.156275    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:27:13.167758    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:27:13.167766    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:27:13.203061    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:27:13.203078    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:27:13.217164    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:27:13.217176    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:27:13.229149    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:27:13.229165    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:27:13.252790    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:27:13.252802    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:27:13.264069    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:27:13.264080    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:27:13.268783    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:27:13.268789    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:13.284146    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:27:13.284156    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:27:13.299388    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:27:13.299397    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:27:13.311177    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:27:13.311187    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:27:15.830718    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:20.832793    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:20.832966    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:27:20.847056    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:27:20.847145    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:27:20.858941    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:27:20.859036    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:27:20.871209    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:27:20.871302    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:27:20.883429    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:27:20.883522    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:27:20.895337    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:27:20.895432    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:27:20.907442    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:27:20.907534    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:27:20.925846    4610 logs.go:276] 0 containers: []
	W0919 12:27:20.925859    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:27:20.925935    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:27:20.936370    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:27:20.936387    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:27:20.936393    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:27:20.972463    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:27:20.972474    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:27:20.977020    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:27:20.977027    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:27:20.989262    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:27:20.989273    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:27:21.003931    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:27:21.003941    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:27:21.016052    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:27:21.016063    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:27:21.050974    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:27:21.050990    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:21.065488    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:27:21.065499    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:27:21.080071    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:27:21.080082    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:27:21.091570    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:27:21.091580    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:27:21.103062    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:27:21.103073    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:27:21.120808    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:27:21.120818    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:27:21.144873    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:27:21.144888    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:27:21.159771    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:27:21.159782    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:27:21.171868    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:27:21.171880    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:27:23.685071    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:28.687317    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:28.692711    4610 out.go:201] 
	W0919 12:27:28.695886    4610 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0919 12:27:28.695896    4610 out.go:270] * 
	W0919 12:27:28.696727    4610 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:27:28.707859    4610 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-356000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-19 12:27:28.80054 -0700 PDT m=+2955.355129543
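
The stderr trace above shows the failure mode directly: for the full 6m0s node-start budget, every probe of https://10.0.2.15:8443/healthz times out after roughly five seconds, and after each miss minikube re-enumerates the control-plane containers with "docker ps" and re-gathers their logs before probing again. The following is a minimal Go sketch of that wait pattern only; the function name, constants, and error text are illustrative and are not minikube's actual implementation.

// Minimal sketch of the wait pattern visible in the log above: poll the
// apiserver /healthz endpoint with a short per-request timeout until an
// overall deadline expires. Names and constants are illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthy(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout; matches the ~5s gaps between checks in the log
		Transport: &http.Transport{
			// Sketch only: skip verification so the probe works against an
			// apiserver certificate the host does not trust.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

In this run the probe never succeeds under the qemu2 driver, so the loop exhausts its deadline and minikube exits with status 80, which the test reports as the GUEST_START failure above.
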
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-356000 -n running-upgrade-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-356000 -n running-upgrade-356000: exit status 2 (15.591985375s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
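
The harness treats this status probe as advisory: it runs the minikube binary, captures stdout ("Running"), and records the non-zero exit code instead of aborting, hence "exit status 2 (may be ok)". Below is a hedged Go sketch of that check, assuming the binary path and profile name shown in the log; the helper itself is illustrative, not the test framework's actual code.

// Run `minikube status --format={{.Host}}` for a profile and report both
// the captured stdout and the exit code, treating a non-zero exit as an
// answer rather than a fatal error. Illustrative helper only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // e.g. 2 when a component is unhealthy
		err = nil                 // non-zero status is still a usable answer
	}
	return string(out), code, err
}

func main() {
	status, code, err := hostStatus("running-upgrade-356000")
	if err != nil {
		fmt.Println("status command could not run:", err)
		return
	}
	fmt.Printf("host=%q exit=%d (non-zero may be ok)\n", status, code)
}
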
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-356000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-612000          | force-systemd-flag-612000 | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-722000              | force-systemd-env-722000  | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-722000           | force-systemd-env-722000  | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT | 19 Sep 24 12:17 PDT |
	| start   | -p docker-flags-971000                | docker-flags-971000       | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-612000             | force-systemd-flag-612000 | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-612000          | force-systemd-flag-612000 | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT | 19 Sep 24 12:17 PDT |
	| start   | -p cert-expiration-814000             | cert-expiration-814000    | jenkins | v1.34.0 | 19 Sep 24 12:17 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-971000 ssh               | docker-flags-971000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-971000 ssh               | docker-flags-971000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-971000                | docker-flags-971000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT | 19 Sep 24 12:18 PDT |
	| start   | -p cert-options-665000                | cert-options-665000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-665000 ssh               | cert-options-665000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-665000 -- sudo        | cert-options-665000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-665000                | cert-options-665000       | jenkins | v1.34.0 | 19 Sep 24 12:18 PDT | 19 Sep 24 12:18 PDT |
	| start   | -p running-upgrade-356000             | minikube                  | jenkins | v1.26.0 | 19 Sep 24 12:18 PDT | 19 Sep 24 12:19 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-356000             | running-upgrade-356000    | jenkins | v1.34.0 | 19 Sep 24 12:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-814000             | cert-expiration-814000    | jenkins | v1.34.0 | 19 Sep 24 12:21 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-814000             | cert-expiration-814000    | jenkins | v1.34.0 | 19 Sep 24 12:21 PDT | 19 Sep 24 12:21 PDT |
	| start   | -p kubernetes-upgrade-186000          | kubernetes-upgrade-186000 | jenkins | v1.34.0 | 19 Sep 24 12:21 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-186000          | kubernetes-upgrade-186000 | jenkins | v1.34.0 | 19 Sep 24 12:21 PDT | 19 Sep 24 12:21 PDT |
	| start   | -p kubernetes-upgrade-186000          | kubernetes-upgrade-186000 | jenkins | v1.34.0 | 19 Sep 24 12:21 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-186000          | kubernetes-upgrade-186000 | jenkins | v1.34.0 | 19 Sep 24 12:21 PDT | 19 Sep 24 12:21 PDT |
	| start   | -p stopped-upgrade-269000             | minikube                  | jenkins | v1.26.0 | 19 Sep 24 12:21 PDT | 19 Sep 24 12:22 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-269000 stop           | minikube                  | jenkins | v1.26.0 | 19 Sep 24 12:22 PDT | 19 Sep 24 12:22 PDT |
	| start   | -p stopped-upgrade-269000             | stopped-upgrade-269000    | jenkins | v1.34.0 | 19 Sep 24 12:22 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 12:22:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 12:22:24.566595    4788 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:22:24.566758    4788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:22:24.566764    4788 out.go:358] Setting ErrFile to fd 2...
	I0919 12:22:24.566767    4788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:22:24.566914    4788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:22:24.568094    4788 out.go:352] Setting JSON to false
	I0919 12:22:24.588100    4788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3109,"bootTime":1726770635,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:22:24.588178    4788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:22:24.593475    4788 out.go:177] * [stopped-upgrade-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:22:24.601547    4788 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:22:24.601660    4788 notify.go:220] Checking for updates...
	I0919 12:22:24.607462    4788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:22:24.610430    4788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:22:24.613477    4788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:22:24.614581    4788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:22:24.617503    4788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:22:24.620774    4788 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:22:24.624492    4788 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0919 12:22:24.627479    4788 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:22:24.631466    4788 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:22:24.638481    4788 start.go:297] selected driver: qemu2
	I0919 12:22:24.638490    4788 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:22:24.638547    4788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:22:24.641566    4788 cni.go:84] Creating CNI manager for ""
	I0919 12:22:24.641601    4788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:22:24.641620    4788 start.go:340] cluster config:
	{Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
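
The two long lines above are minikube's cluster config serialized with Go's %+v verb, which is why they print as a single nested struct literal. The sketch below expresses a simplified subset as Go types, using only field names visible in the dump; the real config type carries many more fields, so this is an illustrative reduction, not minikube's actual type.

// Simplified subset of the cluster config printed in the log. Field names
// mirror those visible above (Name, Memory, Driver, KubernetesConfig,
// Nodes); everything else is omitted for brevity.
package main

import "fmt"

type KubernetesConfig struct {
	KubernetesVersion string // "v1.24.1" in this run
	ClusterName       string
	ContainerRuntime  string // "docker" in this run
}

type Node struct {
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

type ClusterConfig struct {
	Name             string
	Memory           int // MB
	CPUs             int
	Driver           string // "qemu2"
	KubernetesConfig KubernetesConfig
	Nodes            []Node
}

func main() {
	cfg := ClusterConfig{
		Name:   "stopped-upgrade-269000",
		Memory: 2200,
		CPUs:   2,
		Driver: "qemu2",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.24.1",
			ClusterName:       "stopped-upgrade-269000",
			ContainerRuntime:  "docker",
		},
		Nodes: []Node{{IP: "10.0.2.15", Port: 8443, ControlPlane: true, Worker: true}},
	}
	fmt.Printf("%+v\n", cfg) // prints in the same nested literal form as the log
}
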
	I0919 12:22:24.641686    4788 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:22:24.648472    4788 out.go:177] * Starting "stopped-upgrade-269000" primary control-plane node in "stopped-upgrade-269000" cluster
	I0919 12:22:24.652454    4788 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0919 12:22:24.652487    4788 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0919 12:22:24.652493    4788 cache.go:56] Caching tarball of preloaded images
	I0919 12:22:24.652574    4788 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:22:24.652581    4788 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0919 12:22:24.652639    4788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/config.json ...
	I0919 12:22:24.653031    4788 start.go:360] acquireMachinesLock for stopped-upgrade-269000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:22:24.653062    4788 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "stopped-upgrade-269000"
	I0919 12:22:24.653071    4788 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:22:24.653079    4788 fix.go:54] fixHost starting: 
	I0919 12:22:24.653206    4788 fix.go:112] recreateIfNeeded on stopped-upgrade-269000: state=Stopped err=<nil>
	W0919 12:22:24.653215    4788 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:22:24.661432    4788 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-269000" ...
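The fixHost path above skips machine creation and instead restarts the existing, stopped VM. A sketch of that state decision, assuming a simplified host-state type in place of libmachine's:

    package main

    import "fmt"

    type State int

    const (
    	Running State = iota
    	Stopped
    	Missing
    )

    // recreateIfNeeded mirrors the decision logged above: an existing machine
    // found in the Stopped state is restarted rather than recreated.
    func recreateIfNeeded(s State) string {
    	switch s {
    	case Running:
    		return "reuse running machine"
    	case Stopped:
    		return "unexpected machine state, will restart"
    	default:
    		return "recreate machine"
    	}
    }

    func main() { fmt.Println(recreateIfNeeded(Stopped)) }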
	I0919 12:22:24.569736    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:24.569815    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:24.582159    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:24.582238    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:24.593014    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:24.593087    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:24.603635    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:24.603699    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:24.613837    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:24.613916    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:24.624507    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:24.624587    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:24.636039    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:24.636115    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:24.646639    4610 logs.go:276] 0 containers: []
	W0919 12:22:24.646649    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:24.646715    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:24.657186    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:24.657201    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:24.657207    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:24.693738    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:24.693749    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:24.715360    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:24.715370    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:24.738166    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:24.738175    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:24.750428    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:24.750444    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:24.764687    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:24.764697    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:24.779463    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:24.779475    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:24.792423    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:24.792438    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:24.813014    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:24.813027    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:24.826399    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:24.826415    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:24.838932    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:24.838944    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:24.850731    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:24.850741    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:24.874459    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:24.874475    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:24.887179    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:24.887189    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:24.924653    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:24.924673    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:24.929670    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:24.929678    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:24.944218    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:24.944233    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
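Each retry cycle in this log starts with a /healthz probe against 10.0.2.15:8443 that fails with a client timeout, after which the container logs above are gathered. A minimal sketch of such a probe; the endpoint and the ~5s budget are taken from the timestamps above, while the TLS handling is an assumption (a bootstrap-style check that skips verification):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // probes above give up after ~5s
    		Transport: &http.Transport{
    			// assumption: skip verification like an early bootstrap check
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. context deadline exceeded
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }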
	I0919 12:22:27.461648    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:24.665447    4788 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:22:24.665550    4788 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50504-:22,hostfwd=tcp::50505-:2376,hostname=stopped-upgrade-269000 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/disk.qcow2
	I0919 12:22:24.712880    4788 main.go:141] libmachine: STDOUT: 
	I0919 12:22:24.712924    4788 main.go:141] libmachine: STDERR: 
	I0919 12:22:24.712932    4788 main.go:141] libmachine: Waiting for VM to start (ssh -p 50504 docker@127.0.0.1)...
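The libmachine line above daemonizes qemu-system-aarch64 with hvf acceleration and user-mode networking, forwarding host ports 50504→22 (SSH) and 50505→2376 (docker). A sketch of issuing that invocation from Go with os/exec; the flags are copied from the log, and the disk path is a placeholder:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Flags as logged above; ISO/firmware/pidfile options elided.
    	cmd := exec.Command("qemu-system-aarch64",
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf",
    		"-m", "2200", "-smp", "2",
    		"-nic", "user,model=virtio,hostfwd=tcp::50504-:22,hostfwd=tcp::50505-:2376",
    		"-daemonize", "disk.qcow2", // placeholder disk image path
    	)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("STDOUT/STDERR: %s err: %v\n", out, err)
    }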
	I0919 12:22:32.464275    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:32.464492    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:32.476411    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:32.476508    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:32.487880    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:32.487976    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:32.501100    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:32.501184    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:32.511520    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:32.511604    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:32.526530    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:32.526617    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:32.537828    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:32.537910    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:32.548333    4610 logs.go:276] 0 containers: []
	W0919 12:22:32.548344    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:32.548416    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:32.560399    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:32.560419    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:32.560425    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:32.572550    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:32.572561    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:32.584085    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:32.584097    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:32.619735    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:32.619742    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:32.624143    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:32.624149    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:32.659523    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:32.659534    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:32.674432    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:32.674443    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:32.688675    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:32.688686    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:32.708054    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:32.708071    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:32.723078    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:32.723088    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:32.748140    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:32.748147    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:32.761051    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:32.761063    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:32.773927    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:32.773938    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:32.792474    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:32.792490    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:32.804788    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:32.804798    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:32.817048    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:32.817058    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:32.837129    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:32.837139    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:35.349855    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:40.352046    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:40.352506    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:40.387500    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:40.387656    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:40.410428    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:40.410544    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:40.424399    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:40.424491    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:40.437958    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:40.438047    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:40.447974    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:40.448060    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:40.458791    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:40.458878    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:40.468964    4610 logs.go:276] 0 containers: []
	W0919 12:22:40.468977    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:40.469043    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:40.480260    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:40.480286    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:40.480295    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:40.521455    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:40.521468    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:40.539489    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:40.539500    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:40.550665    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:40.550676    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:40.567393    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:40.567404    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:40.578540    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:40.578554    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:40.601156    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:40.601163    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:40.605153    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:40.605162    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:40.619448    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:40.619461    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:40.631569    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:40.631579    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:40.642190    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:40.642200    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:40.653947    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:40.653961    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:40.687834    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:40.687844    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:40.710620    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:40.710630    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:40.725957    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:40.725968    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:40.739823    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:40.739835    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:40.755897    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:40.755908    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:44.319803    4788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/config.json ...
	I0919 12:22:44.320374    4788 machine.go:93] provisionDockerMachine start ...
	I0919 12:22:44.320493    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.320786    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.320802    4788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 12:22:44.386927    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0919 12:22:44.386947    4788 buildroot.go:166] provisioning hostname "stopped-upgrade-269000"
	I0919 12:22:44.387008    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.387127    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.387134    4788 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-269000 && echo "stopped-upgrade-269000" | sudo tee /etc/hostname
	I0919 12:22:44.446336    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-269000
	
	I0919 12:22:44.446398    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.446508    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.446516    4788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-269000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-269000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-269000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 12:22:44.506800    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 12:22:44.506814    4788 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19664-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19664-1099/.minikube}
	I0919 12:22:44.506823    4788 buildroot.go:174] setting up certificates
	I0919 12:22:44.506827    4788 provision.go:84] configureAuth start
	I0919 12:22:44.506838    4788 provision.go:143] copyHostCerts
	I0919 12:22:44.506924    4788 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem, removing ...
	I0919 12:22:44.506931    4788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem
	I0919 12:22:44.507099    4788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem (1078 bytes)
	I0919 12:22:44.507274    4788 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem, removing ...
	I0919 12:22:44.507277    4788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem
	I0919 12:22:44.507329    4788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem (1123 bytes)
	I0919 12:22:44.507432    4788 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem, removing ...
	I0919 12:22:44.507435    4788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem
	I0919 12:22:44.507472    4788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem (1679 bytes)
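copyHostCerts above follows a remove-then-copy pattern for each of ca.pem, cert.pem, and key.pem, so a stale certificate never survives re-provisioning. A sketch of that pattern with placeholder paths:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // copyCert removes any existing destination first, then copies the
    // source over, mirroring the found/rm/cp sequence in the log above.
    func copyCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	fmt.Println(copyCert("certs/ca.pem", "ca.pem")) // placeholder paths
    }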
	I0919 12:22:44.507567    4788 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-269000 san=[127.0.0.1 localhost minikube stopped-upgrade-269000]
	I0919 12:22:43.267983    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:44.599769    4788 provision.go:177] copyRemoteCerts
	I0919 12:22:44.599820    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 12:22:44.599827    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:22:44.631068    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 12:22:44.637776    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 12:22:44.644792    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 12:22:44.651862    4788 provision.go:87] duration metric: took 145.027041ms to configureAuth
	I0919 12:22:44.651872    4788 buildroot.go:189] setting minikube options for container-runtime
	I0919 12:22:44.651987    4788 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:22:44.652035    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.652127    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.652131    4788 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 12:22:44.708464    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 12:22:44.708472    4788 buildroot.go:70] root file system type: tmpfs
	I0919 12:22:44.708528    4788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 12:22:44.708574    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.708680    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.708713    4788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 12:22:44.768708    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 12:22:44.768777    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.768885    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.768896    4788 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 12:22:45.132894    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 12:22:45.132914    4788 machine.go:96] duration metric: took 812.547833ms to provisionDockerMachine
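The diff-or-replace one-liner above makes the unit install idempotent: docker.service is only swapped in (and docker daemon-reloaded, enabled, and restarted) when the freshly rendered unit differs from the installed one; on this first boot the installed unit doesn't exist, so diff fails and the replacement runs. The same guard expressed in Go, as a sketch:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReplace reports whether the rendered unit differs from the
    // installed one; a missing installed unit (first boot) also counts,
    // matching the "can't stat docker.service" output above.
    func needsReplace(installed, rendered string) (bool, error) {
    	old, err := os.ReadFile(installed)
    	if os.IsNotExist(err) {
    		return true, nil
    	}
    	if err != nil {
    		return false, err
    	}
    	newUnit, err := os.ReadFile(rendered)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(old, newUnit), nil
    }

    func main() {
    	ok, err := needsReplace("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new")
    	fmt.Println(ok, err)
    }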
	I0919 12:22:45.132922    4788 start.go:293] postStartSetup for "stopped-upgrade-269000" (driver="qemu2")
	I0919 12:22:45.132929    4788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 12:22:45.132987    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 12:22:45.132996    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:22:45.163204    4788 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 12:22:45.164444    4788 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 12:22:45.164451    4788 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/addons for local assets ...
	I0919 12:22:45.164522    4788 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/files for local assets ...
	I0919 12:22:45.164642    4788 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0919 12:22:45.164744    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 12:22:45.167792    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0919 12:22:45.174493    4788 start.go:296] duration metric: took 41.564458ms for postStartSetup
	I0919 12:22:45.174508    4788 fix.go:56] duration metric: took 20.521994166s for fixHost
	I0919 12:22:45.174550    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:45.174654    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:45.174661    4788 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 12:22:45.232183    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773765.728006837
	
	I0919 12:22:45.232193    4788 fix.go:216] guest clock: 1726773765.728006837
	I0919 12:22:45.232197    4788 fix.go:229] Guest: 2024-09-19 12:22:45.728006837 -0700 PDT Remote: 2024-09-19 12:22:45.17451 -0700 PDT m=+20.635819501 (delta=553.496837ms)
	I0919 12:22:45.232210    4788 fix.go:200] guest clock delta is within tolerance: 553.496837ms
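The clock check above parses the guest's `date +%s.%N` output and compares it against the host clock, accepting the skew when it is within tolerance. A sketch of the delta computation; the guest timestamp is taken from the log, while the tolerance value is an assumption:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as logged above.
    	guestRaw := "1726773765.728006837"
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(time.Now())
    	if delta < 0 {
    		delta = -delta
    	}
    	// assumption: sub-second drift like the logged 553ms is acceptable
    	fmt.Printf("guest clock delta: %v (ok=%v)\n", delta, delta < 2*time.Second)
    }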
	I0919 12:22:45.232212    4788 start.go:83] releasing machines lock for "stopped-upgrade-269000", held for 20.579709s
	I0919 12:22:45.232291    4788 ssh_runner.go:195] Run: cat /version.json
	I0919 12:22:45.232294    4788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 12:22:45.232300    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:22:45.232312    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	W0919 12:22:45.232875    4788 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50504: connect: connection refused
	I0919 12:22:45.232900    4788 retry.go:31] will retry after 325.814447ms: dial tcp [::1]:50504: connect: connection refused
	W0919 12:22:45.617002    4788 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0919 12:22:45.617116    4788 ssh_runner.go:195] Run: systemctl --version
	I0919 12:22:45.620547    4788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 12:22:45.623381    4788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 12:22:45.623426    4788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0919 12:22:45.628277    4788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0919 12:22:45.639113    4788 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
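The two find/sed passes above rewrite any bridge and podman CNI configs under /etc/cni/net.d so their subnet is pinned to 10.244.0.0/16 (and IPv6 entries are dropped), which is what produces the "configured [...] bridge cni config(s)" line. The core substitution, applied to an in-memory conflist in Go as a sketch (the JSON shape is a minimal assumption):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Minimal stand-in for a bridge conflist found on disk.
    	conf := `{ "plugins": [ { "type": "bridge", "ipam": { "subnet": "10.88.0.0/16" } } ] }`
    	// The same substitution the sed expression performs: pin the subnet.
    	re := regexp.MustCompile(`"subnet": "[^"]*"`)
    	fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
    }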
	I0919 12:22:45.639126    4788 start.go:495] detecting cgroup driver to use...
	I0919 12:22:45.639219    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 12:22:45.648756    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0919 12:22:45.652306    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 12:22:45.657354    4788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 12:22:45.657426    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 12:22:45.661626    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 12:22:45.666069    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 12:22:45.669778    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 12:22:45.673044    4788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 12:22:45.676750    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 12:22:45.680265    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 12:22:45.683290    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 12:22:45.686080    4788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 12:22:45.688762    4788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 12:22:45.691874    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:45.762286    4788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 12:22:45.768318    4788 start.go:495] detecting cgroup driver to use...
	I0919 12:22:45.768395    4788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 12:22:45.776392    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 12:22:45.781162    4788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 12:22:45.788073    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 12:22:45.792208    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 12:22:45.797108    4788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 12:22:45.844782    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 12:22:45.849981    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 12:22:45.855247    4788 ssh_runner.go:195] Run: which cri-dockerd
	I0919 12:22:45.856613    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 12:22:45.859648    4788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 12:22:45.864646    4788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 12:22:45.944421    4788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 12:22:46.026182    4788 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 12:22:46.026238    4788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
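The scp above writes a 130-byte /etc/docker/daemon.json that pins docker to the cgroupfs driver. The exact bytes are not reproduced in the log (only the size), so the fields below are an assumption modeled on the logged cgroup-driver choice; the sketch just renders such a file:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// assumption: a minimal daemon.json consistent with the logged
    	// "configuring docker to use cgroupfs as cgroup driver" step
    	cfg := map[string]any{
    		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver":     "json-file",
    		"log-opts":       map[string]string{"max-size": "100m"},
    		"storage-driver": "overlay2",
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }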
	I0919 12:22:46.031237    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:46.108000    4788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 12:22:47.262057    4788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154071875s)
	I0919 12:22:47.262131    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 12:22:47.266392    4788 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 12:22:47.272620    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 12:22:47.277318    4788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 12:22:47.359740    4788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 12:22:47.435010    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:47.512888    4788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 12:22:47.519026    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 12:22:47.523244    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:47.588222    4788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 12:22:47.628667    4788 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 12:22:47.628767    4788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 12:22:47.631766    4788 start.go:563] Will wait 60s for crictl version
	I0919 12:22:47.631829    4788 ssh_runner.go:195] Run: which crictl
	I0919 12:22:47.633147    4788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 12:22:47.647799    4788 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0919 12:22:47.647881    4788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 12:22:47.664128    4788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 12:22:47.684231    4788 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0919 12:22:47.684310    4788 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0919 12:22:47.685611    4788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 12:22:47.689313    4788 kubeadm.go:883] updating cluster {Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0919 12:22:47.689356    4788 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0919 12:22:47.689406    4788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 12:22:47.699677    4788 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 12:22:47.699686    4788 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0919 12:22:47.699744    4788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 12:22:47.702730    4788 ssh_runner.go:195] Run: which lz4
	I0919 12:22:47.703979    4788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 12:22:47.705221    4788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 12:22:47.705233    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0919 12:22:48.642081    4788 docker.go:649] duration metric: took 938.158958ms to copy over tarball
	I0919 12:22:48.642152    4788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
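The preload fast path above first stats the tarball on the guest, falls back to an scp transfer when the stat fails, then unpacks it with `tar -I lz4` into /var. A sketch of that check-then-copy-then-extract sequence, where runSSH is a stand-in for minikube's ssh_runner (here wired to local exec purely for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensurePreload mirrors the log: only transfer the tarball when the
    // remote stat fails, then extract it with lz4 under /var.
    func ensurePreload(runSSH func(args ...string) error) error {
    	if err := runSSH("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
    		fmt.Println("existence check failed, copying tarball:", err)
    		// scp of the cached tarball elided; paths as in the log
    	}
    	return runSSH("sudo", "tar", "--xattrs", "--xattrs-include",
    		"security.capability", "-I", "lz4", "-C", "/var", "-xf",
    		"/preloaded.tar.lz4")
    }

    func main() {
    	local := func(args ...string) error {
    		return exec.Command(args[0], args[1:]...).Run()
    	}
    	fmt.Println(ensurePreload(local))
    }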
	I0919 12:22:48.268509    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:48.268614    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:48.281339    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:48.281432    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:48.293287    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:48.293377    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:48.305210    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:48.305301    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:48.319754    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:48.319851    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:48.338091    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:48.338181    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:48.354297    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:48.354385    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:48.366180    4610 logs.go:276] 0 containers: []
	W0919 12:22:48.366191    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:48.366270    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:48.377702    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:48.377720    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:48.377726    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:48.393270    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:48.393287    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:48.409206    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:48.409222    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:22:48.426373    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:48.426386    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:48.450948    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:48.450966    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:48.463501    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:48.463513    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:48.478219    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:48.478231    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:48.516815    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:48.516834    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:48.557664    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:48.557677    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:48.573108    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:48.573119    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:48.588613    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:48.588626    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:48.601879    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:48.601891    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:48.614348    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:48.614365    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:48.619137    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:48.619152    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:48.632593    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:48.632606    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:48.645740    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:48.645751    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:48.665077    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:48.665091    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:51.187031    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:49.789434    4788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.147299417s)
	I0919 12:22:49.789449    4788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 12:22:49.805230    4788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 12:22:49.808635    4788 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0919 12:22:49.813843    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:49.893079    4788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 12:22:51.658463    4788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.765416041s)
	I0919 12:22:51.658580    4788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 12:22:51.669772    4788 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 12:22:51.669788    4788 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0919 12:22:51.669793    4788 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 12:22:51.674085    4788 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:51.675908    4788 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:51.678123    4788 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:51.678407    4788 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:51.680460    4788 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:51.680529    4788 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:51.682301    4788 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:51.682298    4788 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:51.683836    4788 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:51.683912    4788 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:51.685069    4788 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0919 12:22:51.686175    4788 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:51.686439    4788 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:51.686844    4788 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:51.687185    4788 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 12:22:51.688910    4788 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
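The retrieving/daemon-lookup pairs above show LoadCachedImages probing the host's docker daemon for each required image first; every "No such image" miss falls back to the on-disk image cache, which is why the later lines load from .minikube/cache/images. A sketch of that fallback order (the cache layout shown is hypothetical):

    package main

    import "fmt"

    // retrieve mirrors the lookup order in the log: ask the local daemon
    // first; a "No such image" miss falls back to the cached tarball path.
    func retrieve(img string, daemonHas map[string]bool) string {
    	if daemonHas[img] {
    		return "daemon: " + img
    	}
    	return "cache: /cache/images/arm64/" + img // hypothetical cache layout
    }

    func main() {
    	fmt.Println(retrieve("registry.k8s.io/pause:3.7", map[string]bool{}))
    }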
	I0919 12:22:52.077487    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:52.079468    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:52.093460    4788 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0919 12:22:52.093489    4788 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:52.093563    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:52.101516    4788 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0919 12:22:52.101539    4788 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:52.101621    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:52.107617    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0919 12:22:52.112336    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0919 12:22:52.119154    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:52.122667    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:52.126353    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:52.134027    4788 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0919 12:22:52.134048    4788 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0919 12:22:52.134054    4788 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:52.134059    4788 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:52.134122    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:52.134122    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:52.143681    4788 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0919 12:22:52.143705    4788 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:52.143773    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:52.150225    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0919 12:22:52.151890    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0919 12:22:52.155493    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0919 12:22:52.166401    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0919 12:22:52.166413    4788 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0919 12:22:52.166431    4788 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0919 12:22:52.166497    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0919 12:22:52.176189    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0919 12:22:52.176313    4788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0919 12:22:52.177858    4788 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0919 12:22:52.177870    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
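Before each scp, the runner probes the VM with stat -c "%s %y"; exit status 1 ("No such file or directory", as above) is read as "absent, transfer it". A sketch of the same probe, assuming a plain ssh client on PATH and a hypothetical host alias minikube-vm:

package main

import (
	"fmt"
	"os/exec"
)

// remoteFileExists mirrors the existence check above: stat the remote file
// and treat any non-zero exit as "needs transfer".
func remoteFileExists(host, path string) bool {
	cmd := exec.Command("ssh", host, fmt.Sprintf("stat -c '%%s %%y' %q", path))
	return cmd.Run() == nil
}

func main() {
	if !remoteFileExists("minikube-vm", "/var/lib/minikube/images/pause_3.7") {
		fmt.Println("not on the VM yet; scp the cached tarball over")
	}
}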
	W0919 12:22:52.179312    4788 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0919 12:22:52.179428    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.185615    4788 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0919 12:22:52.185627    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0919 12:22:52.195313    4788 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0919 12:22:52.195338    4788 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.195409    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.227709    4788 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0919 12:22:52.227751    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0919 12:22:52.227879    4788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0919 12:22:52.229245    4788 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0919 12:22:52.229258    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0919 12:22:52.269364    4788 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0919 12:22:52.269379    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0919 12:22:52.312630    4788 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0919 12:22:52.493286    4788 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0919 12:22:52.493481    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:52.507866    4788 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0919 12:22:52.507901    4788 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:52.507983    4788 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:52.523672    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0919 12:22:52.523806    4788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0919 12:22:52.525248    4788 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0919 12:22:52.525260    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0919 12:22:52.556434    4788 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0919 12:22:52.556450    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0919 12:22:52.794550    4788 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0919 12:22:52.794587    4788 cache_images.go:92] duration metric: took 1.124819709s to LoadCachedImages
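The load step pipes each tarball through docker ("sudo cat ... | docker load") rather than passing a path, so it works even when the file is only root-readable. A sketch of one such load, assuming docker and bash on the target:

package main

import (
	"fmt"
	"os/exec"
)

// loadImage replays the pattern from the log: cat the tarball as root and
// feed it to docker load on stdin.
func loadImage(tarball string) error {
	return exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", tarball)).Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println("load failed:", err)
	}
}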
	W0919 12:22:52.794628    4788 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0919 12:22:52.794634    4788 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0919 12:22:52.794689    4788 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-269000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 12:22:52.794758    4788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 12:22:52.807781    4788 cni.go:84] Creating CNI manager for ""
	I0919 12:22:52.807793    4788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:22:52.807805    4788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 12:22:52.807813    4788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-269000 NodeName:stopped-upgrade-269000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 12:22:52.807881    4788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-269000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 12:22:52.807955    4788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0919 12:22:52.811007    4788 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 12:22:52.811043    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 12:22:52.813533    4788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0919 12:22:52.818490    4788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 12:22:52.823272    4788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0919 12:22:52.828385    4788 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0919 12:22:52.829573    4788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
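The /etc/hosts edit above is idempotent: any previous control-plane.minikube.internal line is stripped, the current mapping is appended, and the temp file is copied into place. A Go sketch of the same edit (run as root to write /etc/hosts; not minikube's code):

package main

import (
	"os"
	"strings"
)

// pinHost removes any line already ending in "\t<name>" and appends a fresh
// "ip\tname" mapping, matching the bash one-liner in the log.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHost("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal")
}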
	I0919 12:22:52.833265    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:52.919376    4788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:22:52.926397    4788 certs.go:68] Setting up /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000 for IP: 10.0.2.15
	I0919 12:22:52.926407    4788 certs.go:194] generating shared ca certs ...
	I0919 12:22:52.926416    4788 certs.go:226] acquiring lock for ca certs: {Name:mk207a98b59455406f5fa19947ac5c81f6753b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:52.926565    4788 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key
	I0919 12:22:52.926603    4788 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key
	I0919 12:22:52.926613    4788 certs.go:256] generating profile certs ...
	I0919 12:22:52.926673    4788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.key
	I0919 12:22:52.926696    4788 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be
	I0919 12:22:52.926709    4788 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0919 12:22:53.064991    4788 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be ...
	I0919 12:22:53.065008    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be: {Name:mk4ebd5ae5db10b2597167055ceae25473bd7724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:53.065963    4788 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be ...
	I0919 12:22:53.065970    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be: {Name:mkf32725161b788bb445ec4c580490c2d7786db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:53.066137    4788 certs.go:381] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt
	I0919 12:22:53.066293    4788 certs.go:385] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key
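The apiserver cert minted above carries the service IP, loopback, and node IP as SANs. A minimal crypto/x509 sketch of generating a cert with those IP SANs (self-signed here to keep the example short; minikube actually signs with its CA):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// The same IP SAN list the log shows being baked into apiserver.crt.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}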
	I0919 12:22:53.066433    4788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/proxy-client.key
	I0919 12:22:53.066566    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618.pem (1338 bytes)
	W0919 12:22:53.066588    4788 certs.go:480] ignoring /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0919 12:22:53.066593    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 12:22:53.066612    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem (1078 bytes)
	I0919 12:22:53.066630    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem (1123 bytes)
	I0919 12:22:53.066648    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem (1679 bytes)
	I0919 12:22:53.066685    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0919 12:22:53.067008    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 12:22:53.073834    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 12:22:53.080934    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 12:22:53.088221    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 12:22:53.095955    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 12:22:53.103146    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 12:22:53.109966    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 12:22:53.116794    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 12:22:53.124137    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0919 12:22:53.131268    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0919 12:22:53.137647    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 12:22:53.144512    4788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 12:22:53.150829    4788 ssh_runner.go:195] Run: openssl version
	I0919 12:22:53.152615    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 12:22:53.155574    4788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:22:53.156917    4788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:22:53.156940    4788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:22:53.158779    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 12:22:53.161585    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0919 12:22:53.164996    4788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0919 12:22:53.166391    4788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 18:54 /usr/share/ca-certificates/1618.pem
	I0919 12:22:53.166413    4788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0919 12:22:53.168068    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0919 12:22:53.171133    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0919 12:22:53.174027    4788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0919 12:22:53.175328    4788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 18:54 /usr/share/ca-certificates/16182.pem
	I0919 12:22:53.175347    4788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0919 12:22:53.177021    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
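Each symlink above exists because OpenSSL locates CA certificates by subject hash: the link /etc/ssl/certs/<hash>.0 points at the PEM. A sketch of computing the hash and linking, shelling out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert asks openssl for the subject hash of a PEM and installs the
// <hash>.0 symlink that the trust store lookup expects.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}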
	I0919 12:22:53.180536    4788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 12:22:53.181853    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 12:22:53.183654    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 12:22:53.185635    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 12:22:53.187447    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 12:22:53.189179    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 12:22:53.190989    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
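The openssl x509 -checkend 86400 runs above ask "does this cert survive another 24 hours?" (exit 0 if yes). The same check expressed with Go's standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside the
// next duration d, i.e. the condition -checkend flags.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}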
	I0919 12:22:53.192850    4788 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:22:53.192927    4788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 12:22:53.204642    4788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 12:22:53.208171    4788 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 12:22:53.208185    4788 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0919 12:22:53.208215    4788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 12:22:53.210929    4788 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:22:53.211238    4788 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-269000" does not appear in /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:22:53.211338    4788 kubeconfig.go:62] /Users/jenkins/minikube-integration/19664-1099/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-269000" cluster setting kubeconfig missing "stopped-upgrade-269000" context setting]
	I0919 12:22:53.211539    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:53.212260    4788 kapi.go:59] client config for stopped-upgrade-269000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104009800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:22:53.212598    4788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 12:22:53.215320    4788 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-269000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
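The drift check above rides on diff's exit status: 0 means the deployed kubeadm.yaml matches the new one, 1 means reconfigure, 2 means diff itself failed. A sketch of that decision:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps the exit code onto a
// drifted/identical/error result, as the log's reconfigure decision does.
func configDrifted(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ: reconfigure the cluster
	}
	return false, err // diff could not run
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}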
	I0919 12:22:53.215326    4788 kubeadm.go:1160] stopping kube-system containers ...
	I0919 12:22:53.215379    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 12:22:53.227514    4788 docker.go:483] Stopping containers: [6e24dc0306c2 219994403f67 a04ca8cc8c56 9ceebd9f5b94 69919762f36d c50b0db508a6 3d2544e1d664 7a4823763f68]
	I0919 12:22:53.227605    4788 ssh_runner.go:195] Run: docker stop 6e24dc0306c2 219994403f67 a04ca8cc8c56 9ceebd9f5b94 69919762f36d c50b0db508a6 3d2544e1d664 7a4823763f68
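Stopping kube-system containers is a two-step list-then-stop: collect IDs whose names match the k8s_*_(kube-system)_ pattern, then stop them all in one docker invocation. Sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List candidate container IDs, matching the filter used in the log.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	fmt.Println("Stopping containers:", ids)
	_ = exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}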
	I0919 12:22:53.238347    4788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 12:22:53.243931    4788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:22:53.246718    4788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 12:22:53.246725    4788 kubeadm.go:157] found existing configuration files:
	
	I0919 12:22:53.246751    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0919 12:22:53.249247    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 12:22:53.249274    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:22:53.252257    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0919 12:22:53.254871    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 12:22:53.254894    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:22:53.257392    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0919 12:22:53.260346    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 12:22:53.260371    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:22:53.263202    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0919 12:22:53.265629    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 12:22:53.265653    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
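The cleanup loop above keeps a kubeconfig only if it already references the expected control-plane endpoint; any file that fails the grep (including files that do not exist) is rm -f'd so kubeadm can regenerate it. A local-filesystem sketch of the same rule:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStale removes every config that cannot prove it points at the
// expected endpoint, mirroring the grep-then-rm sequence in the log.
func pruneStale(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", p)
			os.Remove(p)
		}
	}
}

func main() {
	pruneStale("https://control-plane.minikube.internal:50538", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}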
	I0919 12:22:53.268703    4788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:22:53.271887    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:53.296722    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:53.870759    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:54.000957    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:54.029186    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:54.053773    4788 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:22:54.053857    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:22:54.555276    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
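Waiting for the apiserver process is a simple pgrep poll: retry until the full command line matches or a deadline passes. Sketch of that loop (interval and timeout are illustrative, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep with the same flags as the log (-x exact,
// -n newest, -f match against the full command line).
func waitForProcess(pattern string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
}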
	I0919 12:22:56.189159    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:22:56.189643    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:22:56.227892    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:22:56.228034    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:22:56.246239    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:22:56.246360    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:22:56.259937    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:22:56.260032    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:22:56.274336    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:22:56.274422    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:22:56.284773    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:22:56.284853    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:22:56.295651    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:22:56.295731    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:22:56.306339    4610 logs.go:276] 0 containers: []
	W0919 12:22:56.306351    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:22:56.306420    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:22:56.317872    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:22:56.317890    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:22:56.317895    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:22:56.333231    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:22:56.333242    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:22:56.347971    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:22:56.347983    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:22:56.366321    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:22:56.366333    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:22:56.380057    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:22:56.380072    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:22:56.393981    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:22:56.393992    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:22:56.416193    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:22:56.416200    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:22:56.450392    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:22:56.450401    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:22:56.454672    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:22:56.454678    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:22:56.468131    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:22:56.468141    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:22:56.485645    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:22:56.485655    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:22:56.499085    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:22:56.499098    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:22:56.537208    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:22:56.537219    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:22:56.556336    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:22:56.556353    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:22:56.571651    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:22:56.571665    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:22:56.588900    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:22:56.588911    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:22:56.600755    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:22:56.600768    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
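While healthz keeps failing, the harness snapshots diagnostics between probes: the last 400 lines of each control-plane container, plus journalctl and dmesg. Sketch of one container gather step (the ID is reused from the log):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainer fetches the last n lines of a container's logs; docker
// writes the stderr stream to stderr, so CombinedOutput captures both.
func tailContainer(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainer("4e4e4a383f70", 400)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(logs)
}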
	I0919 12:22:55.055876    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:22:55.060046    4788 api_server.go:72] duration metric: took 1.006303125s to wait for apiserver process to appear ...
	I0919 12:22:55.060055    4788 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:22:55.060065    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:22:59.119030    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:00.062047    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
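Each healthz probe above is an HTTPS GET with a short client timeout; when the deadline is exceeded the probe is logged as "stopped" and retried. A sketch of one probe (InsecureSkipVerify stands in for loading the cluster CA, which the real client does):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // illustrative; the probe gives up quickly and retries
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}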
	I0919 12:23:00.062104    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:04.121264    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:04.121662    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:23:04.149481    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:23:04.149623    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:23:04.169157    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:23:04.169266    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:23:04.183572    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:23:04.183675    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:23:04.195295    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:23:04.195383    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:23:04.207235    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:23:04.207317    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:23:04.218773    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:23:04.218867    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:23:04.228980    4610 logs.go:276] 0 containers: []
	W0919 12:23:04.228994    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:23:04.229062    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:23:04.239578    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:23:04.239594    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:23:04.239599    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:23:04.251490    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:23:04.251502    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:23:04.263633    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:23:04.263648    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:23:04.268256    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:23:04.268263    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:23:04.280478    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:23:04.280488    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:23:04.296382    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:23:04.296395    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:23:04.308435    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:23:04.308447    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:23:04.330479    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:23:04.330488    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:23:04.365178    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:23:04.365193    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:23:04.377325    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:23:04.377336    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:23:04.401508    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:23:04.401523    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:23:04.416526    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:23:04.416538    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:23:04.452170    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:23:04.452178    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:23:04.468463    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:23:04.468474    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:23:04.488007    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:23:04.488022    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:23:04.504602    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:23:04.504612    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:23:04.518943    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:23:04.518953    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:23:07.032407    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:05.062256    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:05.062326    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:12.034071    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:12.034181    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:23:12.045295    4610 logs.go:276] 2 containers: [4e4e4a383f70 3652994714e2]
	I0919 12:23:12.045380    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:23:12.057309    4610 logs.go:276] 2 containers: [da27d8fa2473 103fc45092f8]
	I0919 12:23:12.057398    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:23:12.068129    4610 logs.go:276] 1 containers: [02ffade1b5ef]
	I0919 12:23:12.068212    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:23:12.079446    4610 logs.go:276] 2 containers: [c04e4293f6a7 e2b28bfdabb8]
	I0919 12:23:12.079530    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:23:12.090971    4610 logs.go:276] 1 containers: [7f8247dc1b75]
	I0919 12:23:12.091047    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:23:12.102997    4610 logs.go:276] 2 containers: [6b66f8d8b0a5 32dca4ac5ee1]
	I0919 12:23:12.103082    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:23:12.114863    4610 logs.go:276] 0 containers: []
	W0919 12:23:12.114876    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:23:12.114947    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:23:12.126339    4610 logs.go:276] 2 containers: [467ec8178011 3b91fc4d40a5]
	I0919 12:23:12.126361    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:23:12.126367    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:23:12.131186    4610 logs.go:123] Gathering logs for kube-apiserver [3652994714e2] ...
	I0919 12:23:12.131198    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652994714e2"
	I0919 12:23:12.151191    4610 logs.go:123] Gathering logs for kube-proxy [7f8247dc1b75] ...
	I0919 12:23:12.151206    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f8247dc1b75"
	I0919 12:23:12.164574    4610 logs.go:123] Gathering logs for kube-controller-manager [6b66f8d8b0a5] ...
	I0919 12:23:12.164587    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b66f8d8b0a5"
	I0919 12:23:12.182859    4610 logs.go:123] Gathering logs for storage-provisioner [3b91fc4d40a5] ...
	I0919 12:23:12.182873    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b91fc4d40a5"
	I0919 12:23:12.195707    4610 logs.go:123] Gathering logs for kube-apiserver [4e4e4a383f70] ...
	I0919 12:23:12.195720    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e4e4a383f70"
	I0919 12:23:12.212994    4610 logs.go:123] Gathering logs for etcd [da27d8fa2473] ...
	I0919 12:23:12.213010    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da27d8fa2473"
	I0919 12:23:12.229528    4610 logs.go:123] Gathering logs for etcd [103fc45092f8] ...
	I0919 12:23:12.229542    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 103fc45092f8"
	I0919 12:23:12.250766    4610 logs.go:123] Gathering logs for coredns [02ffade1b5ef] ...
	I0919 12:23:12.250777    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ffade1b5ef"
	I0919 12:23:12.273295    4610 logs.go:123] Gathering logs for kube-scheduler [e2b28bfdabb8] ...
	I0919 12:23:12.273308    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2b28bfdabb8"
	I0919 12:23:12.297065    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:23:12.297082    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:23:12.310562    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:23:12.310576    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:23:12.348292    4610 logs.go:123] Gathering logs for kube-scheduler [c04e4293f6a7] ...
	I0919 12:23:12.348311    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c04e4293f6a7"
	I0919 12:23:12.360307    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:23:12.360320    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:23:12.384568    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:23:12.384581    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:23:12.418833    4610 logs.go:123] Gathering logs for kube-controller-manager [32dca4ac5ee1] ...
	I0919 12:23:12.418845    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dca4ac5ee1"
	I0919 12:23:12.431783    4610 logs.go:123] Gathering logs for storage-provisioner [467ec8178011] ...
	I0919 12:23:12.431794    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ec8178011"
	I0919 12:23:10.062741    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:10.062786    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:14.956732    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:15.063354    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:15.063413    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:19.959318    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:19.959481    4610 kubeadm.go:597] duration metric: took 4m4.313125208s to restartPrimaryControlPlane
	W0919 12:23:19.959607    4610 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0919 12:23:19.959650    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0919 12:23:20.991296    4610 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031659875s)
	I0919 12:23:20.991366    4610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 12:23:20.996422    4610 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:23:20.999376    4610 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:23:21.002012    4610 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 12:23:21.002018    4610 kubeadm.go:157] found existing configuration files:
	
	I0919 12:23:21.002046    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf
	I0919 12:23:21.004421    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 12:23:21.004447    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:23:21.007064    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf
	I0919 12:23:21.009632    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 12:23:21.009664    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:23:21.012263    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf
	I0919 12:23:21.015220    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 12:23:21.015246    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:23:21.018530    4610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf
	I0919 12:23:21.021038    4610 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 12:23:21.021062    4610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
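Taken together, the grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when it does not match (here every file is already absent, so each grep exits with status 2 and each rm is a no-op). A minimal bash sketch of that loop, with the endpoint copied from the log:

    # Drop kubeconfigs that do not reference the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:50300"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint is absent or the file is missing
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done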
	I0919 12:23:21.023690    4610 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 12:23:21.043008    4610 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0919 12:23:21.043046    4610 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 12:23:21.092691    4610 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 12:23:21.092756    4610 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 12:23:21.092815    4610 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 12:23:21.145074    4610 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 12:23:21.149214    4610 out.go:235]   - Generating certificates and keys ...
	I0919 12:23:21.149247    4610 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 12:23:21.149278    4610 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 12:23:21.149316    4610 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 12:23:21.149346    4610 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0919 12:23:21.149379    4610 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 12:23:21.149410    4610 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0919 12:23:21.149467    4610 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0919 12:23:21.149607    4610 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0919 12:23:21.149644    4610 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 12:23:21.149679    4610 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 12:23:21.149698    4610 kubeadm.go:310] [certs] Using the existing "sa" key
	I0919 12:23:21.149728    4610 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 12:23:21.337170    4610 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 12:23:21.419296    4610 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 12:23:21.543500    4610 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 12:23:21.836183    4610 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 12:23:21.863841    4610 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 12:23:21.864277    4610 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 12:23:21.864368    4610 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 12:23:21.941299    4610 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 12:23:21.945523    4610 out.go:235]   - Booting up control plane ...
	I0919 12:23:21.945573    4610 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 12:23:21.945633    4610 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 12:23:21.945669    4610 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 12:23:21.945715    4610 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 12:23:21.945803    4610 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 12:23:20.064176    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:20.064204    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:26.452232    4610 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.507557 seconds
	I0919 12:23:26.452324    4610 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 12:23:26.457957    4610 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 12:23:26.973794    4610 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 12:23:26.973958    4610 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-356000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 12:23:27.474409    4610 kubeadm.go:310] [bootstrap-token] Using token: p5d3qd.ufb657tusl8cqnx2
	I0919 12:23:27.478152    4610 out.go:235]   - Configuring RBAC rules ...
	I0919 12:23:27.478211    4610 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 12:23:27.478253    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 12:23:27.481927    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 12:23:27.482833    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0919 12:23:27.483649    4610 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 12:23:27.484376    4610 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 12:23:27.487756    4610 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 12:23:27.662664    4610 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 12:23:27.878103    4610 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 12:23:27.878469    4610 kubeadm.go:310] 
	I0919 12:23:27.878501    4610 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 12:23:27.878505    4610 kubeadm.go:310] 
	I0919 12:23:27.878548    4610 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 12:23:27.878553    4610 kubeadm.go:310] 
	I0919 12:23:27.878571    4610 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 12:23:27.878606    4610 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 12:23:27.878635    4610 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 12:23:27.878639    4610 kubeadm.go:310] 
	I0919 12:23:27.878672    4610 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 12:23:27.878677    4610 kubeadm.go:310] 
	I0919 12:23:27.878718    4610 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 12:23:27.878724    4610 kubeadm.go:310] 
	I0919 12:23:27.878757    4610 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 12:23:27.878808    4610 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 12:23:27.878851    4610 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 12:23:27.878854    4610 kubeadm.go:310] 
	I0919 12:23:27.878907    4610 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 12:23:27.878951    4610 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 12:23:27.878955    4610 kubeadm.go:310] 
	I0919 12:23:27.879000    4610 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p5d3qd.ufb657tusl8cqnx2 \
	I0919 12:23:27.879063    4610 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 \
	I0919 12:23:27.879074    4610 kubeadm.go:310] 	--control-plane 
	I0919 12:23:27.879077    4610 kubeadm.go:310] 
	I0919 12:23:27.879130    4610 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 12:23:27.879135    4610 kubeadm.go:310] 
	I0919 12:23:27.879197    4610 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p5d3qd.ufb657tusl8cqnx2 \
	I0919 12:23:27.879246    4610 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 
	I0919 12:23:27.879337    4610 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
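The join commands above carry a bootstrap token and a CA certificate hash. Should the hash need to be re-derived later (tokens expire and printouts get lost), it can be recomputed from the cluster CA with the standard openssl pipeline from the Kubernetes docs; the CA path below assumes the certificateDir reported earlier (/var/lib/minikube/certs). The second command addresses the [WARNING Service-Kubelet] line:

    # Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

    # Clear the preflight warning: have kubelet start on boot.
    sudo systemctl enable kubelet.service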
	I0919 12:23:27.879346    4610 cni.go:84] Creating CNI manager for ""
	I0919 12:23:27.879356    4610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:23:27.885735    4610 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 12:23:27.888842    4610 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 12:23:27.892407    4610 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
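The conflist above is copied from memory ("scp memory"), so its exact 496 bytes are not in this log. For orientation only, a hypothetical bridge configuration of the same shape (plugin names follow the standard CNI bridge/portmap plugins; the subnet is an illustrative assumption, not the value minikube wrote):

    # Illustrative only -- not the file minikube actually generated.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF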
	I0919 12:23:27.897662    4610 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 12:23:27.897721    4610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 12:23:27.897774    4610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-356000 minikube.k8s.io/updated_at=2024_09_19T12_23_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=running-upgrade-356000 minikube.k8s.io/primary=true
	I0919 12:23:27.942459    4610 ops.go:34] apiserver oom_adj: -16
	I0919 12:23:27.942475    4610 kubeadm.go:1113] duration metric: took 44.809ms to wait for elevateKubeSystemPrivileges
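elevateKubeSystemPrivileges refers to the clusterrolebinding created two lines up: it grants cluster-admin to kube-system's default service account so system pods and addons can operate. A quick check that the binding exists, using the same pinned kubectl and kubeconfig as the log:

    # Confirm the cluster-admin binding for kube-system:default landed.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide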
	I0919 12:23:27.942487    4610 kubeadm.go:394] duration metric: took 4m12.310354709s to StartCluster
	I0919 12:23:27.942496    4610 settings.go:142] acquiring lock: {Name:mk40c96dc3647741b89517369d27068bccfc0e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:23:27.942600    4610 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:23:27.942995    4610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:23:27.943228    4610 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:23:27.943249    4610 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 12:23:27.943306    4610 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-356000"
	I0919 12:23:27.943313    4610 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-356000"
	W0919 12:23:27.943316    4610 addons.go:243] addon storage-provisioner should already be in state true
	I0919 12:23:27.943327    4610 host.go:66] Checking if "running-upgrade-356000" exists ...
	I0919 12:23:27.943329    4610 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-356000"
	I0919 12:23:27.943363    4610 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-356000"
	I0919 12:23:27.943340    4610 config.go:182] Loaded profile config "running-upgrade-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:23:27.943591    4610 retry.go:31] will retry after 690.213829ms: connect: dial unix /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/monitor: connect: connection refused
	I0919 12:23:27.944313    4610 kapi.go:59] client config for running-upgrade-356000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/running-upgrade-356000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a5800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:23:27.944434    4610 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-356000"
	W0919 12:23:27.944439    4610 addons.go:243] addon default-storageclass should already be in state true
	I0919 12:23:27.944447    4610 host.go:66] Checking if "running-upgrade-356000" exists ...
	I0919 12:23:27.944966    4610 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 12:23:27.944972    4610 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 12:23:27.944977    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:23:27.946829    4610 out.go:177] * Verifying Kubernetes components...
	I0919 12:23:27.953769    4610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:23:28.035740    4610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:23:28.041291    4610 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:23:28.041346    4610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:23:28.045424    4610 api_server.go:72] duration metric: took 102.188208ms to wait for apiserver process to appear ...
	I0919 12:23:28.045432    4610 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:23:28.045439    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:28.098633    4610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 12:23:28.396943    4610 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 12:23:28.396957    4610 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 12:23:28.638899    4610 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:23:25.065202    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:25.065222    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:28.643198    4610 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 12:23:28.643206    4610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 12:23:28.643214    4610 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/running-upgrade-356000/id_rsa Username:docker}
	I0919 12:23:28.682350    4610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
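Once storage-provisioner.yaml is applied, the addon runs as a single pod in kube-system. A verification sketch (the pod name storage-provisioner comes from the manifest and is assumed here, since the manifest body is not shown in this log):

    # Check the storage-provisioner addon pod.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pod storage-provisioner -o wide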
	I0919 12:23:30.066477    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:30.066521    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:33.047119    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:33.047168    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:35.068184    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:35.068280    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:38.047383    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:38.047424    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:40.070710    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:40.070745    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:43.047691    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:43.047715    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:45.072837    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:45.072863    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:48.047945    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:48.047984    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:50.073926    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:50.074022    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:53.048361    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:53.048393    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:58.048849    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:58.048886    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0919 12:23:58.398387    4610 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0919 12:23:58.402661    4610 out.go:177] * Enabled addons: storage-provisioner
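The default-storageclass failure above is an i/o timeout on the StorageClass list call, i.e. the same unreachable apiserver seen in the healthz polling, not an addon-specific bug. Re-running the failing call directly makes that distinction visible (sketch; --request-timeout is an assumption added to avoid hanging indefinitely):

    # Reproduce the failing StorageClass list against the guest apiserver.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      --request-timeout=10s get storageclasses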
	I0919 12:23:55.076099    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:55.076275    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:23:55.087367    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:23:55.087465    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:23:55.098077    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:23:55.098169    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:23:55.108471    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:23:55.108546    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:23:55.119975    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:23:55.120073    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:23:55.136110    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:23:55.136190    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:23:55.146976    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:23:55.147065    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:23:55.157181    4788 logs.go:276] 0 containers: []
	W0919 12:23:55.157191    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:23:55.157255    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:23:55.167739    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:23:55.167757    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:23:55.167763    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:23:55.208760    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:23:55.208772    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:23:55.220478    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:23:55.220491    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:23:55.232992    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:23:55.233002    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:23:55.245223    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:23:55.245236    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:23:55.271758    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:23:55.271770    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:23:55.356684    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:23:55.356696    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:23:55.371979    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:23:55.371991    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:23:55.390220    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:23:55.390232    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:23:55.402237    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:23:55.402249    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:23:55.414789    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:23:55.414799    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:23:55.419375    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:23:55.419382    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:23:55.433101    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:23:55.433113    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:23:55.444716    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:23:55.444726    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:23:55.458156    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:23:55.458166    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:23:55.471926    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:23:55.471936    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:23:55.488044    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:23:55.488055    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
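Each diagnostic pass like the one above follows a single pattern: list containers per component with a k8s_<name> filter, then tail each container's logs. A compact sketch of that loop, built from the exact docker invocations in the log:

    # Mirror the log gathering: find k8s_<component> containers, tail their logs.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && { echo "No container was found matching \"$c\""; continue; }
      for id in $ids; do
        echo "== $c [$id] =="
        docker logs --tail 400 "$id"
      done
    done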
	I0919 12:23:58.028106    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:58.410419    4610 addons.go:510] duration metric: took 30.46801025s for enable addons: enabled=[storage-provisioner]
	I0919 12:24:03.027300    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:03.027880    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:03.065905    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:03.066053    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:03.086711    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:03.086810    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:03.099430    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:03.099527    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:03.110868    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:03.110964    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:03.121726    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:03.121808    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:03.132372    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:03.132455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:03.143743    4788 logs.go:276] 0 containers: []
	W0919 12:24:03.143754    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:03.143822    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:03.154720    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:03.154736    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:03.154742    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:03.159098    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:03.159104    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:03.173365    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:03.173378    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:03.192914    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:03.192926    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:03.218560    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:03.218570    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:03.256811    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:03.256825    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:03.270885    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:03.270910    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:03.293158    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:03.293167    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:03.304034    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:03.304045    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:03.315487    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:03.315499    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:03.326930    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:03.326940    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:03.364428    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:03.364440    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:03.384952    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:03.384961    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:03.397032    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:03.397041    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:03.414574    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:03.414586    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:03.428579    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:03.428589    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:03.439824    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:03.439835    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:03.046346    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:03.046396    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:05.978408    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:08.043737    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:08.043773    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:10.976754    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:10.976928    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:10.994294    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:10.994399    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:11.008107    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:11.008199    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:11.020874    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:11.020956    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:11.033529    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:11.033616    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:11.044451    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:11.044532    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:11.055633    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:11.055725    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:11.065894    4788 logs.go:276] 0 containers: []
	W0919 12:24:11.065907    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:11.065986    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:11.080410    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:11.080429    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:11.080435    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:11.085130    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:11.085137    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:11.104526    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:11.104536    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:11.115885    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:11.115897    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:11.140528    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:11.140536    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:11.178498    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:11.178508    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:11.227043    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:11.227057    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:11.239824    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:11.239835    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:11.256728    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:11.256741    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:11.268970    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:11.268981    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:11.285097    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:11.285109    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:11.303995    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:11.304004    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:11.316140    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:11.316151    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:11.354543    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:11.354555    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:11.368906    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:11.368917    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:11.383385    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:11.383399    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:11.394730    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:11.394739    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:13.910392    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:13.042444    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:13.042488    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:18.910970    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:18.911235    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:18.935131    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:18.935259    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:18.951297    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:18.951394    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:18.963471    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:18.963562    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:18.975137    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:18.975211    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:18.985972    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:18.986059    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:18.996523    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:18.996613    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:19.007486    4788 logs.go:276] 0 containers: []
	W0919 12:24:19.007500    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:19.007575    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:19.018191    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:19.018209    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:19.018215    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:19.022507    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:19.022514    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:19.036741    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:19.036753    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:19.052041    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:19.052051    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:19.078341    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:19.078356    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:19.094940    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:19.094952    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:19.131880    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:19.131891    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:19.172017    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:19.172028    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:19.184393    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:19.184405    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:19.201953    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:19.201965    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:19.215895    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:19.215907    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:19.230008    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:19.230019    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:19.240829    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:19.240840    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:19.253807    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:19.253817    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:19.291122    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:19.291136    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:19.305304    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:19.305315    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:19.316239    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:19.316250    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:18.042250    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:18.042309    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:21.830451    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:23.043142    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:23.043193    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:26.831476    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:26.831729    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:26.857524    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:26.857683    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:26.874936    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:26.875042    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:26.888122    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:26.888223    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:26.899849    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:26.899933    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:26.909834    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:26.909918    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:26.924732    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:26.924820    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:26.934961    4788 logs.go:276] 0 containers: []
	W0919 12:24:26.934979    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:26.935057    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:26.945657    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:26.945673    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:26.945678    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:26.985230    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:26.985242    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:27.028375    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:27.028386    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:27.042757    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:27.042770    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:27.057135    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:27.057145    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:27.072076    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:27.072088    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:27.085703    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:27.085716    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:27.090358    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:27.090368    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:27.101736    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:27.101748    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:27.113772    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:27.113785    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:27.131857    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:27.131867    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:27.144219    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:27.144231    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:27.183882    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:27.183895    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:27.195721    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:27.195733    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:27.207844    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:27.207855    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:27.221883    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:27.221897    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:27.233991    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:27.234003    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:28.044430    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:28.044653    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:28.061154    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:28.061256    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:28.081802    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:28.081896    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:28.103541    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:28.103620    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:28.116647    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:28.116734    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:28.127147    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:28.127239    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:28.138154    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:28.138232    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:28.149266    4610 logs.go:276] 0 containers: []
	W0919 12:24:28.149278    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:28.149350    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:28.159718    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:28.159735    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:28.159740    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:28.193549    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:28.193557    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:28.207506    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:28.207516    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:28.219731    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:28.219740    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:28.234529    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:28.234544    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:28.253386    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:28.253400    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:28.278730    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:28.278738    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:28.290206    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:28.290217    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:28.295344    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:28.295350    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:28.331650    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:28.331660    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:28.345822    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:28.345832    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:28.357688    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:28.357697    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:28.369457    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:28.369469    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:30.881390    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:29.760325    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:35.882956    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
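
The api_server.go:253/269 pairs are a health probe against the apiserver's /healthz endpoint; the roughly 5 s gap between each "Checking" line and its "stopped" line matches the client timeout named in the error. A rough interactive equivalent with curl (an approximation, not minikube's actual Go client; /healthz typically answers with the bare string "ok"):

    # Probe the apiserver health endpoint with a hard 5-second deadline.
    # -k: the test cluster's serving certificate is self-signed.
    if curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -qx 'ok'; then
      echo 'apiserver healthy'
    else
      echo 'healthz check failed or timed out' >&2
    fi
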
	I0919 12:24:35.883169    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:35.898822    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:35.898918    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:35.911135    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:35.911226    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:35.921925    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:35.922002    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:35.933393    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:35.933481    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:35.943651    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:35.943726    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:35.953658    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:35.953742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:35.963909    4610 logs.go:276] 0 containers: []
	W0919 12:24:35.963919    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:35.963988    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:35.974619    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:35.974635    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:35.974640    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:35.987162    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:35.987172    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:35.991940    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:35.991946    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:36.003647    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:36.003657    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:36.018746    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:36.018758    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:36.030353    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:36.030365    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:36.047686    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:36.047697    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:36.072407    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:36.072417    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:36.107702    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:36.107713    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:36.142757    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:36.142768    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:36.156546    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:36.156560    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:36.171243    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:36.171253    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:36.183275    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:36.183287    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
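
Every "Gathering logs for <component> [<id>] ..." entry resolves to the same tail command against a container ID found during discovery. For one component, the equivalent standalone step is:

    # Tail the newest 400 lines from each kube-apiserver container, running or exited.
    for id in $(docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'); do
      echo "=== kube-apiserver ${id} ==="
      docker logs --tail 400 "${id}"
    done
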
	I0919 12:24:34.761843    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:34.762011    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:34.773181    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:34.773260    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:34.784044    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:34.784135    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:34.794649    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:34.794730    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:34.806609    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:34.806697    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:34.816931    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:34.817016    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:34.827688    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:34.827775    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:34.842464    4788 logs.go:276] 0 containers: []
	W0919 12:24:34.842477    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:34.842557    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:34.852973    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:34.853014    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:34.853020    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:34.857171    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:34.857177    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:34.871084    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:34.871100    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:34.885408    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:34.885422    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:34.897083    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:34.897097    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:34.908032    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:34.908043    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:34.926146    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:34.926163    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:34.937495    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:34.937505    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:34.948985    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:34.949000    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:34.966108    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:34.966118    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:34.980946    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:34.980956    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:34.992368    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:34.992384    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:35.003930    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:35.003940    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:35.043144    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:35.043153    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:35.086919    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:35.086930    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:35.125139    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:35.125150    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:35.141737    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:35.141747    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
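
The kubelet and Docker sections come from the guest's systemd journal rather than from containers, since neither runs as a pod; cri-docker is the cri-dockerd shim unit. Run interactively, the two queries above are:

    # Kubelet logs live in the journal, not in any container.
    sudo journalctl -u kubelet -n 400
    # Docker daemon and cri-dockerd shim, collected in a single query.
    sudo journalctl -u docker -u cri-docker -n 400
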
	I0919 12:24:37.667510    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:38.697156    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:42.669648    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:42.669988    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:42.696749    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:42.696900    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:42.713849    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:42.713953    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:42.727812    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:42.727909    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:42.739433    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:42.739522    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:42.753983    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:42.754064    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:42.764220    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:42.764306    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:42.774575    4788 logs.go:276] 0 containers: []
	W0919 12:24:42.774588    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:42.774656    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:42.785098    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:42.785115    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:42.785120    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:42.800285    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:42.800295    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:42.812351    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:42.812361    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:42.850460    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:42.850473    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:42.862557    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:42.862570    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:42.900136    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:42.900146    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:42.936949    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:42.936963    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:42.954280    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:42.954291    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:42.968184    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:42.968194    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:42.980236    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:42.980246    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:42.984161    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:42.984170    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:42.998167    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:42.998177    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:43.010061    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:43.010071    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:43.032078    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:43.032087    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:43.043361    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:43.043372    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:43.067084    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:43.067093    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:43.080904    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:43.080915    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
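
The "container status" step is a fallback chain: it prefers crictl when present and drops back to the Docker CLI otherwise. The backticks are command substitution; `which crictl || echo crictl` yields either the resolved path or the bare name, so when crictl is absent the sudo invocation fails and the || branch runs docker ps -a instead. The same logic, unrolled:

    # Prefer the CRI-generic listing; fall back to the Docker CLI if crictl is missing.
    CRICTL=$(which crictl || echo crictl)
    sudo "${CRICTL}" ps -a || sudo docker ps -a
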
	I0919 12:24:43.698941    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:43.699164    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:43.724075    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:43.724195    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:43.740927    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:43.741032    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:43.755996    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:43.756096    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:43.769896    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:43.769971    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:43.780285    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:43.780357    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:43.790505    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:43.790580    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:43.801017    4610 logs.go:276] 0 containers: []
	W0919 12:24:43.801029    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:43.801101    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:43.811246    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:43.811261    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:43.811267    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:43.845762    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:43.845774    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:43.850052    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:43.850058    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:43.863808    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:43.863820    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:43.877873    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:43.877883    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:43.889305    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:43.889315    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:43.900579    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:43.900588    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:43.915149    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:43.915157    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:43.927199    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:43.927209    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:43.945235    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:43.945245    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:43.957529    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:43.957541    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:43.991962    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:43.991973    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:44.003381    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:44.003394    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
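
"describe nodes" is collected with the kubectl binary that minikube stages inside the guest, versioned to match the deployed Kubernetes release, and pointed at the guest-local kubeconfig so it needs no host credentials:

    # Guest-local kubectl matching the cluster version; no host kubeconfig involved.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
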
	I0919 12:24:46.528702    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:45.593993    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:51.529307    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:51.529559    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:51.547165    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:51.547261    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:51.560376    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:51.560470    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:51.572157    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:51.572230    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:51.583029    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:51.583097    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:51.593530    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:51.593614    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:51.604021    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:51.604111    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:51.614580    4610 logs.go:276] 0 containers: []
	W0919 12:24:51.614592    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:51.614665    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:51.628902    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:51.628916    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:51.628922    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:24:51.640493    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:51.640503    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:51.645312    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:51.645322    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:51.682188    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:51.682198    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:51.696209    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:51.696221    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:51.710554    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:51.710566    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:51.725384    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:51.725394    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:51.737386    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:51.737398    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:51.754231    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:51.754246    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:51.779380    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:51.779394    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:51.814864    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:51.814874    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:51.831512    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:51.831528    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:51.849125    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:51.849136    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
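
The dmesg step narrows the kernel ring buffer to actionable messages. With util-linux dmesg: -H formats timestamps human-readably, -P suppresses the pager that -H would otherwise start, -L=never disables color codes (the output is captured, not viewed), and --level keeps only warnings and worse; tail then bounds the volume:

    # Kernel warnings and errors only, newest 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
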
	I0919 12:24:50.595892    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:50.596182    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:50.618137    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:50.618261    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:50.639141    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:50.639238    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:50.651311    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:50.651389    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:50.661398    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:50.661485    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:50.672066    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:50.672159    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:50.682655    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:50.682729    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:50.694494    4788 logs.go:276] 0 containers: []
	W0919 12:24:50.694514    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:50.694590    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:50.705609    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:50.705628    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:50.705636    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:50.723391    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:50.723400    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:50.734455    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:50.734467    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:50.749906    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:50.749919    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:50.763502    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:50.763511    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:50.800604    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:50.800615    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:50.804698    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:50.804704    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:50.816080    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:50.816090    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:50.833297    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:50.833308    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:50.844744    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:50.844754    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:50.883212    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:50.883220    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:50.894588    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:50.894600    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:50.920471    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:50.920478    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:50.932248    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:50.932263    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:50.946257    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:50.946267    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:50.960124    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:50.960134    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:50.972077    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:50.972086    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:53.508259    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:54.362150    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:58.510254    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:58.510455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:58.522071    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:58.522165    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:58.533374    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:58.533475    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:58.543944    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:58.544030    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:58.554659    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:58.554748    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:58.566023    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:58.566107    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:58.576771    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:58.576846    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:58.587426    4788 logs.go:276] 0 containers: []
	W0919 12:24:58.587438    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:58.587516    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:58.600042    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:58.600064    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:58.600069    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:58.611567    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:58.611578    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:58.630759    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:58.630775    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:58.641929    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:58.641940    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:58.683426    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:58.683436    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:58.698440    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:58.698452    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:58.717626    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:58.717635    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:58.729750    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:58.729764    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:58.741386    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:58.741396    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:58.753923    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:58.753939    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:58.804458    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:58.804468    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:58.820805    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:58.820819    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:58.835200    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:58.835214    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:58.859645    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:58.859653    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:58.898097    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:58.898105    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:58.902539    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:58.902545    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:58.917245    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:58.917257    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
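
In the pid-4788 cycles, several components report two container IDs (e.g. kube-apiserver [ca8b4def2e91 6e24dc0306c2]). Because discovery uses docker ps -a, exited containers are matched too, so after a component restart both the current and the previous instance are found, and both get tailed. To see which ID is which, a status column can be added to the same filter:

    # Show run state per matched ID; exited entries are earlier instances.
    docker ps -a --filter name=k8s_kube-apiserver \
        --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}'
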
	I0919 12:24:59.364173    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:59.364336    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:59.377024    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:24:59.377113    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:59.387846    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:24:59.387940    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:59.398745    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:24:59.398834    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:59.409431    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:24:59.409520    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:59.419695    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:24:59.419784    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:59.431322    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:24:59.431402    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:59.441851    4610 logs.go:276] 0 containers: []
	W0919 12:24:59.441862    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:59.441931    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:59.452074    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:24:59.452090    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:59.452096    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:59.486971    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:24:59.486979    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:24:59.503900    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:24:59.503910    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:24:59.515096    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:24:59.515107    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:24:59.530244    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:24:59.530258    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:24:59.542109    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:24:59.542123    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:24:59.559216    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:59.559226    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:59.585876    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:24:59.585889    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:59.596933    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:59.596948    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:59.601552    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:59.601561    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:59.639402    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:24:59.639415    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:24:59.653565    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:24:59.653578    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:24:59.665159    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:24:59.665169    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:02.179072    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:01.429302    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:07.181454    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:07.181772    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:07.213116    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:07.213259    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:07.230793    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:07.230903    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:07.243889    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:07.243978    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:07.254960    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:07.255047    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:07.265619    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:07.265704    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:07.275952    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:07.276037    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:07.286526    4610 logs.go:276] 0 containers: []
	W0919 12:25:07.286536    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:07.286609    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:07.297369    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:07.297384    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:07.297389    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:07.310178    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:07.310188    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:07.322019    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:07.322030    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:07.336397    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:07.336407    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:07.349115    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:07.349125    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:07.373359    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:07.373372    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:07.408521    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:07.408532    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:07.444210    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:07.444222    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:07.458715    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:07.458725    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:07.471332    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:07.471341    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:07.495911    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:07.495918    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:07.506938    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:07.506948    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:07.511961    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:07.511968    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:06.431385    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:06.431592    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:06.445129    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:06.445228    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:06.456219    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:06.456310    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:06.466596    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:06.466685    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:06.477264    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:06.477351    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:06.487903    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:06.487985    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:06.498735    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:06.498812    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:06.509136    4788 logs.go:276] 0 containers: []
	W0919 12:25:06.509146    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:06.509220    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:06.519210    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:06.519227    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:06.519233    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:06.531174    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:06.531184    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:06.542177    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:06.542189    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:06.546656    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:06.546667    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:06.560450    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:06.560460    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:06.598339    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:06.598350    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:06.613285    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:06.613294    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:06.625189    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:06.625200    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:06.642470    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:06.642480    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:06.681595    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:06.681603    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:06.715876    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:06.715887    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:06.732275    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:06.732284    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:06.743714    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:06.743723    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:06.768287    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:06.768295    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:06.779928    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:06.779939    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:06.795035    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:06.795046    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:06.810249    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:06.810259    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:09.323985    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:10.028341    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:14.326209    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:14.326598    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:14.354862    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:14.355017    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:14.373679    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:14.373798    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:14.393367    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:14.393451    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:14.405049    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:14.405137    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:14.416422    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:14.416509    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:14.427218    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:14.427302    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:14.439933    4788 logs.go:276] 0 containers: []
	W0919 12:25:14.439950    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:14.440022    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:14.450966    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:14.450988    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:14.450994    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:14.489724    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:14.489740    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:14.503867    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:14.503881    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:14.522755    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:14.522769    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:14.534365    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:14.534375    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:15.028780    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:15.029059    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:15.049741    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:15.049856    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:15.064599    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:15.064692    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:15.077101    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:15.077189    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:15.088445    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:15.088530    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:15.099151    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:15.099234    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:15.109670    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:15.109756    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:15.119485    4610 logs.go:276] 0 containers: []
	W0919 12:25:15.119496    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:15.119559    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:15.129777    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:15.129793    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:15.129799    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:15.134744    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:15.134752    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:15.148775    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:15.148786    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:15.163376    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:15.163387    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:15.175036    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:15.175046    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:15.189927    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:15.189937    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:15.202192    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:15.202202    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:15.217455    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:15.217468    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:15.251661    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:15.251676    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:15.266051    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:15.266062    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:15.283426    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:15.283437    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:15.307657    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:15.307668    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:15.319559    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:15.319572    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:14.557442    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:14.557450    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:14.594542    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:14.594551    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:14.628843    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:14.628858    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:14.641445    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:14.641457    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:14.656918    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:14.656934    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:14.671641    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:14.671653    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:14.683945    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:14.683957    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:14.702623    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:14.702634    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:14.714336    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:14.714347    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:14.728131    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:14.728142    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:14.732636    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:14.732643    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:14.744194    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:14.744206    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
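(The alternating "Checking apiserver healthz" and "Gathering logs" blocks above show minikube's wait loop: each healthz probe times out after roughly five seconds, and on every timeout the runner falls back to enumerating the control-plane containers and tailing their logs. A minimal sketch of that probe follows, assuming hypothetical names; checkHealthz is illustrative, not the actual code behind api_server.go:253.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz probes an apiserver healthz endpoint once, mirroring the
    // "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded"
    // pair in the log above.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s gap between probe and timeout lines
            Transport: &http.Transport{
                // the in-VM apiserver cert is not trusted by the host, so skip
                // verification for this illustrative probe
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err) // on failure the caller falls back to gathering container logs
        }
    }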
	I0919 12:25:17.256062    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:17.857003    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:22.258310    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:22.258537    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:22.276549    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:22.276663    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:22.289761    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:22.289854    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:22.301201    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:22.301293    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:22.311874    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:22.311965    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:22.322312    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:22.322396    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:22.333620    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:22.333698    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:22.345268    4788 logs.go:276] 0 containers: []
	W0919 12:25:22.345278    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:22.345344    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:22.355676    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:22.355695    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:22.355701    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:22.370522    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:22.370532    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:22.384728    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:22.384738    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:22.421782    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:22.421795    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:22.426343    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:22.426351    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:22.443102    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:22.443118    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:22.454544    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:22.454561    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:22.466723    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:22.466734    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:22.483537    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:22.483547    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:22.522280    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:22.522291    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:22.539969    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:22.539980    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:22.563634    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:22.563643    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:22.581039    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:22.581051    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:22.616736    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:22.616747    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:22.630420    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:22.630428    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:22.645007    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:22.645017    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:22.660395    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:22.660406    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:22.859166    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:22.859338    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:22.870898    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:22.870987    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:22.881999    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:22.882080    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:22.896811    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:22.896891    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:22.908008    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:22.908094    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:22.918661    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:22.918742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:22.930297    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:22.930371    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:22.940203    4610 logs.go:276] 0 containers: []
	W0919 12:25:22.940215    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:22.940287    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:22.950742    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:22.950758    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:22.950763    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:22.965615    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:22.965625    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:22.977359    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:22.977369    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:22.998646    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:22.998654    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:23.011474    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:23.011485    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:23.036034    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:23.036049    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:23.068986    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:23.068994    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:23.073144    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:23.073150    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:23.087019    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:23.087033    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:23.099119    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:23.099136    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:23.110906    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:23.110918    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:23.145915    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:23.145925    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:23.160811    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:23.160821    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
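(Each enumeration step above runs docker ps -a with a name filter per component and parses one container ID per line; two IDs for the same component, as in the 4788 process's output, indicate an exited container alongside its replacement. A self-contained sketch of that step, with hypothetical names; containerIDs is illustrative, not minikube's logs.go:276.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs whose name matches a kube component
    // prefix, mirroring the "docker ps -a --filter=name=k8s_..." calls above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // one ID per output line; an empty result matches the
        // `No container was found matching "kindnet"` warnings above
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }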
	I0919 12:25:25.679759    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:25.173854    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:30.680131    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:30.680240    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:30.691193    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:30.691272    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:30.701887    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:30.701976    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:30.712359    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:30.712440    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:30.723149    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:30.723233    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:30.734250    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:30.734333    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:30.744956    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:30.745037    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:30.755349    4610 logs.go:276] 0 containers: []
	W0919 12:25:30.755363    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:30.755434    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:30.765861    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:30.765876    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:30.765882    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:30.801278    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:30.801288    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:30.816718    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:30.816729    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:30.829611    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:30.829626    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:30.841217    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:30.841230    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:30.858409    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:30.858421    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:30.869455    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:30.869465    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:30.893328    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:30.893337    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:30.897685    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:30.897691    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:30.909397    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:30.909412    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:30.923779    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:30.923792    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:30.935339    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:30.935350    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:30.950138    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:30.950147    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:30.176107    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:30.176562    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:30.207946    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:30.208101    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:30.226238    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:30.226356    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:30.242255    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:30.242349    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:30.254137    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:30.254225    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:30.264625    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:30.264708    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:30.275274    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:30.275360    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:30.286236    4788 logs.go:276] 0 containers: []
	W0919 12:25:30.286250    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:30.286327    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:30.296979    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:30.296998    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:30.297003    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:30.311527    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:30.311538    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:30.329112    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:30.329124    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:30.340638    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:30.340652    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:30.364944    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:30.364952    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:30.405230    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:30.405242    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:30.419939    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:30.419953    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:30.433186    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:30.433198    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:30.455831    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:30.455846    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:30.460220    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:30.460226    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:30.471064    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:30.471075    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:30.482472    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:30.482485    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:30.496429    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:30.496444    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:30.536729    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:30.536748    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:30.575659    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:30.575676    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:30.587843    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:30.587854    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:30.599208    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:30.599218    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
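(The "container status" step above prefers crictl when it is installed and falls back to docker ps: the backtick substitution resolves the crictl path via which, or leaves the bare name so the || fallback fires when crictl is absent or fails. Below is a sketch of the same probe driven from Go, under the assumption that the command runs locally rather than over SSH as ssh_runner.go:195 does.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus runs the same bash one-liner seen in the log: try crictl
    // first, then fall back to docker ps -a.
    func containerStatus() (string, error) {
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker probes failed:", err)
            return
        }
        fmt.Print(out)
    }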
	I0919 12:25:33.115522    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:33.485693    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:38.118114    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:38.118416    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:38.150215    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:38.150329    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:38.165237    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:38.165336    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:38.177346    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:38.177429    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:38.188168    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:38.188252    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:38.198804    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:38.198892    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:38.209356    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:38.209446    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:38.220815    4788 logs.go:276] 0 containers: []
	W0919 12:25:38.220826    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:38.220902    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:38.235431    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:38.235453    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:38.235459    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:38.248325    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:38.248338    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:38.283120    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:38.283135    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:38.307247    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:38.307255    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:38.318771    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:38.318781    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:38.332829    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:38.332840    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:38.343863    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:38.343873    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:38.358621    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:38.358636    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:38.371059    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:38.371072    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:38.397968    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:38.397978    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:38.409828    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:38.409838    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:38.414127    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:38.414133    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:38.452351    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:38.452370    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:38.466061    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:38.466071    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:38.480152    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:38.480161    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:38.494306    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:38.494322    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:38.510917    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:38.510926    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:38.487733    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:38.487845    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:38.499647    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:38.499737    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:38.510775    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:38.510860    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:38.523205    4610 logs.go:276] 2 containers: [201ff29b5789 62f159c99517]
	I0919 12:25:38.523289    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:38.534647    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:38.534735    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:38.546539    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:38.546633    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:38.558929    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:38.559010    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:38.569516    4610 logs.go:276] 0 containers: []
	W0919 12:25:38.569530    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:38.569602    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:38.580240    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:38.580254    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:38.580260    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:38.615519    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:38.615531    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:38.627837    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:38.627851    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:38.643396    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:38.643407    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:38.654784    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:38.654795    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:38.665970    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:38.665981    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:38.684028    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:38.684039    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:38.707247    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:38.707254    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:38.741064    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:38.741078    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:38.745457    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:38.745466    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:38.759614    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:38.759628    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:38.773597    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:38.773612    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:38.785656    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:38.785671    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
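(The kubelet and Docker entries above come from the systemd journal rather than from containers. The sketch below wraps the same journalctl invocation; unitLogs is a hypothetical helper, and passwordless sudo is assumed, as is typical inside the minikube guest.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs tails the systemd journal for the given units, mirroring
    // "sudo journalctl -u docker -u cri-docker -n 400" in the log above.
    func unitLogs(units ...string) (string, error) {
        args := []string{"journalctl", "-n", "400"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := unitLogs("docker", "cri-docker")
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        fmt.Print(out)
    }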
	I0919 12:25:41.299152    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:41.052523    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:46.299322    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:46.299428    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:46.311664    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:46.311754    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:46.324123    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:46.324212    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:46.338985    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:25:46.339077    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:46.350048    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:46.350136    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:46.362098    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:46.362179    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:46.373613    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:46.373704    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:46.385056    4610 logs.go:276] 0 containers: []
	W0919 12:25:46.385069    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:46.385147    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:46.396368    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:46.396387    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:46.396393    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:46.433730    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:46.433742    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:46.449728    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:25:46.449739    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:25:46.461751    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:25:46.461760    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:25:46.474151    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:46.474163    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:46.491450    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:46.491463    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:46.502966    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:46.502981    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:46.536334    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:46.536344    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:46.558456    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:46.558471    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:46.570512    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:46.570522    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:46.583297    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:46.583308    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:46.595115    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:46.595128    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:46.607072    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:46.607087    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:46.632042    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:46.632049    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:46.636367    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:46.636377    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:46.054683    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:46.054952    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:46.075527    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:46.075642    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:46.096497    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:46.096582    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:46.107622    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:46.107703    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:46.117930    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:46.118012    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:46.128228    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:46.128313    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:46.138629    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:46.138702    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:46.148599    4788 logs.go:276] 0 containers: []
	W0919 12:25:46.148613    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:46.148688    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:46.158963    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:46.158979    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:46.158985    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:46.174253    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:46.174264    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:46.188689    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:46.188700    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:46.203799    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:46.203810    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:46.214931    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:46.214942    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:46.226218    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:46.226230    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:46.237834    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:46.237847    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:46.275753    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:46.275766    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:46.313685    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:46.313695    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:46.325963    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:46.325974    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:46.344422    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:46.344440    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:46.359283    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:46.359295    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:46.363670    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:46.363680    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:46.401244    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:46.401257    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:46.425216    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:46.425232    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:46.444017    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:46.444030    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:46.460201    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:46.460212    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:48.974895    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:49.153111    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:53.977474    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:53.977686    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:53.995043    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:53.995150    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:54.008260    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:54.008351    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:54.019673    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:54.019757    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:54.030486    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:54.030577    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:54.041644    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:54.041732    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:54.052323    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:54.052413    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:54.063510    4788 logs.go:276] 0 containers: []
	W0919 12:25:54.063522    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:54.063589    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:54.073919    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:54.073934    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:54.073939    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:54.108073    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:54.108088    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:54.145280    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:54.145290    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:54.161117    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:54.161135    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:54.186864    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:54.186877    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:54.199443    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:54.199459    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:54.217946    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:54.217955    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:54.230936    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:54.230947    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:54.272131    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:54.272146    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:54.291151    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:54.291165    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:54.308151    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:54.308170    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:54.313174    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:54.313183    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:54.328616    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:54.328632    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:54.340997    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:54.341011    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:54.353432    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:54.353444    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:54.369213    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:54.369228    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:54.382732    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:54.382744    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:54.155225    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:54.155327    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:54.167217    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:25:54.167305    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:54.178643    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:25:54.178719    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:54.189702    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:25:54.189812    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:54.204540    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:25:54.204622    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:54.215611    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:25:54.215691    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:54.227643    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:25:54.227728    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:54.243627    4610 logs.go:276] 0 containers: []
	W0919 12:25:54.243639    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:54.243719    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:54.254593    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:25:54.254610    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:54.254616    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:54.259489    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:54.259498    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:54.296669    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:25:54.296682    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:25:54.311753    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:25:54.311762    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:25:54.324769    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:25:54.324781    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:25:54.341279    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:25:54.341288    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:25:54.358000    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:25:54.358012    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:54.370641    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:54.370651    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:54.407920    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:25:54.407937    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:25:54.419458    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:54.419469    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:54.443300    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:25:54.443317    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:25:54.461260    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:25:54.461271    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:25:54.475573    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:25:54.475584    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:25:54.487666    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:25:54.487682    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:25:54.502407    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:25:54.502420    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:25:57.016433    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:56.896931    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:02.018821    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:02.018923    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:02.031007    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:02.031091    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:02.042439    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:02.042524    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:02.054114    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:02.054200    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:02.071586    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:02.071668    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:02.083360    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:02.083439    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:02.094233    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:02.094317    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:02.105272    4610 logs.go:276] 0 containers: []
	W0919 12:26:02.105287    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:02.105360    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:02.116433    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:02.116451    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:02.116456    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:02.132109    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:02.132120    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:02.144626    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:02.144638    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:02.158765    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:02.158778    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:02.171654    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:02.171668    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:02.206945    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:02.206961    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:02.251157    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:02.251168    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:02.278278    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:02.278294    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:02.283587    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:02.283598    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:02.296211    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:02.296224    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:02.314614    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:02.314625    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:02.328382    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:02.328396    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:02.342512    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:02.342525    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:02.359239    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:02.359251    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:02.374595    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:02.374606    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
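
[Note on the pattern above: the two PIDs interleaved in this log (4610 and 4788) are separate minikube processes, each polling the same guest apiserver at https://10.0.2.15:8443/healthz; every probe gives up after roughly five seconds with "context deadline exceeded", which triggers another round of log gathering. The following is a minimal Go sketch of such a poll loop, assuming the ~5s client timeout implied by the timestamps and skip-verify TLS for the self-signed apiserver cert; it is an illustration, not minikube's actual api_server.go code.]

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Assumed from the ~5s gap between "Checking apiserver healthz"
		// and "stopped: ... context deadline exceeded" in the log above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The guest apiserver presents a self-signed certificate,
			// so verification is skipped for this health probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			// Corresponds to the "stopped: ..." lines in the log.
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy; stop polling
			}
		}
		time.Sleep(2500 * time.Millisecond) // brief pause before the next probe
	}
}
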
	I0919 12:26:01.899169    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:01.899371    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:01.912399    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:01.912495    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:01.923975    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:01.924054    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:01.934933    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:01.935022    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:01.945389    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:01.945467    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:01.955925    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:01.955996    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:01.966530    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:01.966595    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:01.976979    4788 logs.go:276] 0 containers: []
	W0919 12:26:01.976993    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:01.977069    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:01.988095    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:01.988112    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:01.988118    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:02.002107    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:02.002119    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:02.013292    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:02.013303    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:02.029027    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:02.029041    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:02.041699    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:02.041714    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:02.054614    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:02.054622    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:02.067465    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:02.067481    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:02.109000    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:02.109011    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:02.113777    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:02.113787    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:02.132756    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:02.132766    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:02.150965    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:02.150982    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:02.169846    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:02.169864    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:02.210639    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:02.210651    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:02.230849    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:02.230861    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:02.248247    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:02.248260    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:02.285673    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:02.285683    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:02.298770    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:02.298785    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
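
[Each gathering round first discovers container IDs per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails the last 400 lines of each match; components with no match (kindnet here) produce the "No container was found" warning. The sketch below mirrors that fan-out under one assumption: it runs docker locally, whereas minikube executes these commands inside the guest over SSH (ssh_runner.go).]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
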
	I0919 12:26:04.887973    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:04.824314    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:09.890020    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:09.890136    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:09.901702    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:09.901794    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:09.913678    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:09.913762    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:09.926649    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:09.926742    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:09.938864    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:09.938946    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:09.950371    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:09.950453    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:09.962197    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:09.962283    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:09.973952    4610 logs.go:276] 0 containers: []
	W0919 12:26:09.973965    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:09.974042    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:09.985969    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:09.985987    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:09.985993    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:09.990878    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:09.990887    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:10.003709    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:10.003721    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:10.016157    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:10.016172    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:10.043711    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:10.043727    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:10.086422    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:10.086436    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:10.112698    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:10.112714    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:10.133149    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:10.133161    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:10.145914    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:10.145927    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:10.161699    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:10.161715    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:10.178147    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:10.178160    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:10.191069    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:10.191080    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:10.203029    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:10.203040    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:10.240607    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:10.240629    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:10.253207    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:10.253218    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:09.826400    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:09.826541    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:09.838099    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:09.838187    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:09.848591    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:09.848677    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:09.859424    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:09.859503    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:09.869979    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:09.870065    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:09.880557    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:09.880644    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:09.891486    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:09.891552    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:09.902325    4788 logs.go:276] 0 containers: []
	W0919 12:26:09.902332    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:09.902371    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:09.913571    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:09.913589    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:09.913596    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:09.951473    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:09.951482    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:09.966584    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:09.966604    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:09.979052    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:09.979064    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:09.998062    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:09.998078    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:10.022971    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:10.022986    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:10.036828    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:10.036840    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:10.077447    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:10.077467    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:10.097188    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:10.097201    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:10.138374    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:10.138388    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:10.154084    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:10.154100    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:10.167341    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:10.167355    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:10.185869    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:10.185886    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:10.200777    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:10.200794    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:10.217078    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:10.217093    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:10.229432    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:10.229444    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:10.242491    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:10.242503    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
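
[Besides per-container logs, each round collects host-level sources: the kubelet and docker/cri-docker journals, a filtered dmesg tail, and kubectl describe nodes against the guest's kubeconfig. In the sketch below the shell commands are copied verbatim from the log above; only the local bash -c execution wrapper is assumed (minikube runs them remotely via ssh_runner.go).]

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ordered to match one gathering round from the log above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig"},
	}
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}
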
	I0919 12:26:12.749306    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:12.767702    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:17.750893    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:17.751226    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:17.785415    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:17.785526    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:17.813055    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:17.813148    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:17.835161    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:17.835363    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:17.852517    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:17.852704    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:17.864531    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:17.864618    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:17.875748    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:17.875836    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:17.886705    4788 logs.go:276] 0 containers: []
	W0919 12:26:17.886717    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:17.886794    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:17.898106    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:17.898123    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:17.898129    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:17.920746    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:17.920755    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:17.959528    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:17.959542    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:17.975408    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:17.975417    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:18.015603    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:18.015614    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:18.031608    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:18.031617    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:18.046585    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:18.046601    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:18.051189    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:18.051203    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:18.064216    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:18.064229    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:18.077194    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:18.077208    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:18.089572    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:18.089583    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:18.108219    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:18.108228    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:18.128838    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:18.128847    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:18.149846    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:18.149859    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:18.165909    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:18.165921    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:18.201021    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:18.201034    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:18.212291    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:18.212304    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:17.769887    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:17.770120    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:17.790994    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:17.791108    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:17.805471    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:17.805566    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:17.831761    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:17.831858    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:17.844708    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:17.844779    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:17.856619    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:17.856697    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:17.868018    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:17.868099    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:17.879576    4610 logs.go:276] 0 containers: []
	W0919 12:26:17.879587    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:17.879655    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:17.891566    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:17.891584    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:17.891590    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:17.926764    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:17.926777    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:17.941507    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:17.941517    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:17.961247    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:17.961257    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:17.975042    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:17.975055    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:17.990986    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:17.990998    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:18.016624    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:18.016633    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:18.030112    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:18.030123    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:18.068258    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:18.068270    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:18.080504    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:18.080514    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:18.093321    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:18.093332    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:18.106041    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:18.106056    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:18.110844    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:18.110858    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:18.126677    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:18.126693    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:18.139073    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:18.139088    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:20.654728    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:20.726147    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:25.657203    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:25.657647    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:25.690090    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:25.690251    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:25.708615    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:25.708734    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:25.722575    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:25.722680    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:25.734906    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:25.734998    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:25.747607    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:25.747697    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:25.759844    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:25.759931    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:25.772152    4610 logs.go:276] 0 containers: []
	W0919 12:26:25.772164    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:25.772244    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:25.783799    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:25.783816    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:25.783822    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:25.802045    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:25.802057    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:25.819297    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:25.819310    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:25.846630    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:25.846644    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:25.861458    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:25.861467    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:25.866827    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:25.866836    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:25.902836    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:25.902846    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:25.915279    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:25.915291    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:25.931538    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:25.931555    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:25.944949    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:25.944960    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:25.958346    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:25.958358    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:25.981939    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:25.981947    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:25.994919    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:25.994929    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:26.010004    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:26.010016    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:26.047802    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:26.047823    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:25.728432    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:25.728512    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:25.740807    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:25.740895    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:25.752386    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:25.752513    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:25.764204    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:25.764290    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:25.777007    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:25.777094    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:25.788470    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:25.788566    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:25.804435    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:25.804521    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:25.814871    4788 logs.go:276] 0 containers: []
	W0919 12:26:25.814882    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:25.814962    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:25.826347    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:25.826366    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:25.826374    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:25.840137    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:25.840150    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:25.858658    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:25.858670    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:25.900407    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:25.900429    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:25.905486    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:25.905499    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:25.917863    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:25.917877    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:25.932557    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:25.932566    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:25.947442    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:25.947451    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:25.963710    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:25.963726    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:25.980131    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:25.980143    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:26.006551    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:26.006569    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:26.019375    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:26.019390    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:26.040029    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:26.040039    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:26.054942    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:26.054959    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:26.067777    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:26.067790    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:26.103740    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:26.103754    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:26.118840    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:26.118850    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:28.659095    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:28.570350    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:33.659553    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:33.659631    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:33.671221    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:33.671315    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:33.682956    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:33.683048    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:33.694396    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:33.694481    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:33.706691    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:33.706774    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:33.719138    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:33.719216    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:33.730713    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:33.730805    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:33.741968    4788 logs.go:276] 0 containers: []
	W0919 12:26:33.741979    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:33.742057    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:33.753298    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:33.753317    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:33.753324    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:33.768670    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:33.768683    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:33.782114    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:33.782127    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:33.798234    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:33.798245    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:33.810526    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:33.810538    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:33.825396    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:33.825412    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:33.837947    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:33.837958    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:33.857655    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:33.857666    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:33.871340    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:33.871352    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:33.913811    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:33.913833    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:33.919381    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:33.919394    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:33.960094    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:33.960106    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:33.974931    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:33.974946    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:33.990514    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:33.990529    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:34.014725    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:34.014733    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:34.050433    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:34.050448    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:34.062135    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:34.062150    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
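
[The "container status" step above uses a shell fallback: `which crictl || echo crictl` prefers crictl when installed, and if that invocation fails the whole pipeline falls back to sudo docker ps -a. A small sketch of running the same one-liner locally; the command string is verbatim from the log, the local execution is the only assumption.]

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when present; fall back to docker otherwise.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
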
	I0919 12:26:33.572966    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:33.573603    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:33.615346    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:33.615506    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:33.638344    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:33.638468    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:33.657680    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:33.657777    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:33.676902    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:33.676991    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:33.688745    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:33.688832    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:33.700967    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:33.701057    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:33.712379    4610 logs.go:276] 0 containers: []
	W0919 12:26:33.712393    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:33.712465    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:33.724608    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:33.724628    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:33.724634    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:33.764497    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:33.764509    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:33.777206    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:33.777219    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:33.793615    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:33.793628    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:33.806500    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:33.806512    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:33.822202    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:33.822216    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:33.838540    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:33.838549    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:33.865193    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:33.865207    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:33.885223    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:33.885234    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:33.891978    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:33.891989    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:33.904195    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:33.904205    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:33.916633    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:33.916644    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:33.929371    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:33.929384    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:33.965249    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:33.965262    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:33.979102    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:33.979114    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:36.500022    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:36.576279    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:41.501206    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:41.501487    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:41.528003    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:41.528123    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:41.542838    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:41.542919    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:41.555720    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:41.555814    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:41.566527    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:41.566615    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:41.576943    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:41.576986    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:41.588520    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:41.588568    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:41.599935    4610 logs.go:276] 0 containers: []
	W0919 12:26:41.599949    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:41.600028    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:41.611496    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:41.611516    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:41.611522    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:41.647681    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:41.647703    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:41.673524    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:41.673536    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:41.689947    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:41.689961    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:41.704194    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:41.704207    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:41.718384    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:41.718396    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:41.735004    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:41.735017    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:41.748784    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:41.748797    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:41.762176    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:41.762187    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:41.783324    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:41.783340    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:41.796622    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:41.796633    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:41.801938    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:41.801947    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:41.841290    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:41.841301    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:41.856418    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:41.856428    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:41.873469    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:41.873481    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:41.576870    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:41.576985    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:41.588205    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:41.588291    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:41.600026    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:41.600071    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:41.613000    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:41.613079    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:41.626157    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:41.626244    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:41.637468    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:41.637556    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:41.655449    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:41.655540    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:41.668610    4788 logs.go:276] 0 containers: []
	W0919 12:26:41.668621    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:41.668699    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:41.679652    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:41.679671    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:41.679677    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:41.698522    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:41.698534    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:41.711280    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:41.711293    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:41.724413    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:41.724427    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:41.728603    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:41.728614    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:41.740496    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:41.740512    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:41.756716    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:41.756730    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:41.772665    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:41.772686    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:41.785489    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:41.785500    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:41.825604    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:41.825627    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:41.865740    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:41.865762    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:41.885641    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:41.885657    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:41.922416    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:41.922427    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:41.939593    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:41.939608    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:41.965796    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:41.965817    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:41.987292    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:41.987306    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:42.000805    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:42.000820    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
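The repeating pattern above is minikube's readiness loop: probe the apiserver /healthz endpoint, and on each timeout re-enumerate the control-plane containers and tail their logs before trying again. A rough equivalent of the probe loop, assuming curl inside the guest stands in for the Go HTTP client:

	# Poll the apiserver health endpoint every 5s until it answers "ok"
	until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	  sleep 5
	done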
	I0919 12:26:44.514361    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:44.399448    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:49.514466    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:49.514566    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:49.525965    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:49.526050    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:49.537315    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:49.537407    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:49.401633    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:49.401939    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:49.429286    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:49.429439    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:49.447099    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:49.447219    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:49.460676    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:49.460769    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:49.471835    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:49.471919    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:49.483069    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:49.483139    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:49.493830    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:49.493913    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:49.503906    4610 logs.go:276] 0 containers: []
	W0919 12:26:49.503920    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:49.503993    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:49.514188    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:49.514207    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:49.514213    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:49.527418    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:49.527429    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:49.539834    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:49.539846    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:49.552747    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:49.552759    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:49.572777    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:49.572786    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:49.599709    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:49.599723    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:49.604847    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:49.604857    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:49.643106    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:49.643119    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:49.658814    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:49.658824    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:49.695656    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:49.695675    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:49.710593    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:49.710614    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:49.723170    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:49.723182    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:49.736441    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:49.736452    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:49.750272    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:49.750284    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:49.763334    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:49.763346    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:26:52.284428    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:49.548371    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:49.548455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:49.559693    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:49.559787    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:49.571410    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:49.571488    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:49.582678    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:49.582764    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:49.597921    4788 logs.go:276] 0 containers: []
	W0919 12:26:49.597933    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:49.598008    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:49.609990    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:49.610011    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:49.610016    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:49.626559    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:49.626572    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:49.639153    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:49.639166    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:49.657067    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:49.657083    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:49.702565    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:49.702579    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:49.719375    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:49.719388    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:49.732469    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:49.732481    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:49.745449    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:49.745462    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:49.785155    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:49.785171    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:49.799235    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:49.799246    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:49.818635    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:49.818646    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:49.842415    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:49.842425    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:49.847030    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:49.847037    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:49.864243    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:49.864257    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:49.883008    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:49.883023    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:49.896667    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:49.896677    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:49.930419    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:49.930430    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:52.447923    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:57.285985    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:57.286267    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:57.306475    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:26:57.306601    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:57.321713    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:26:57.321801    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:57.334270    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:26:57.334363    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:57.344994    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:26:57.345088    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:57.355613    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:26:57.355698    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:57.366115    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:26:57.366200    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:57.376402    4610 logs.go:276] 0 containers: []
	W0919 12:26:57.376413    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:57.376485    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:57.387165    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:26:57.387181    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:26:57.387186    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:26:57.402273    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:57.402284    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:57.435445    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:26:57.435455    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:26:57.447088    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:26:57.447100    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:26:57.459999    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:26:57.460011    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:57.472702    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:26:57.472715    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:26:57.488283    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:26:57.488295    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:26:57.501569    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:26:57.501584    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:26:57.514179    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:57.514189    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:57.450084    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:57.450131    4788 kubeadm.go:597] duration metric: took 4m4.2656165s to restartPrimaryControlPlane
	W0919 12:26:57.450160    4788 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0919 12:26:57.450175    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0919 12:26:58.445905    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 12:26:58.451425    4788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:26:58.454186    4788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:26:58.457005    4788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 12:26:58.457011    4788 kubeadm.go:157] found existing configuration files:
	
	I0919 12:26:58.457039    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0919 12:26:58.459508    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 12:26:58.459538    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:26:58.461962    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0919 12:26:58.464698    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 12:26:58.464725    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:26:58.467223    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0919 12:26:58.469978    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 12:26:58.470003    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:26:58.473190    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0919 12:26:58.475943    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 12:26:58.475971    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 12:26:58.478457    4788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 12:26:58.495279    4788 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0919 12:26:58.495340    4788 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 12:26:58.543136    4788 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 12:26:58.543188    4788 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 12:26:58.543231    4788 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 12:26:58.592453    4788 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 12:26:58.600631    4788 out.go:235]   - Generating certificates and keys ...
	I0919 12:26:58.600664    4788 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 12:26:58.600704    4788 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 12:26:58.600754    4788 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 12:26:58.600793    4788 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0919 12:26:58.600828    4788 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 12:26:58.600860    4788 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0919 12:26:58.600894    4788 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0919 12:26:58.600930    4788 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0919 12:26:58.600967    4788 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 12:26:58.601003    4788 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 12:26:58.601023    4788 kubeadm.go:310] [certs] Using the existing "sa" key
	I0919 12:26:58.601060    4788 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 12:26:58.673442    4788 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 12:26:58.854254    4788 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 12:26:58.919295    4788 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 12:26:59.136297    4788 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 12:26:59.166305    4788 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 12:26:59.166827    4788 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 12:26:59.166950    4788 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 12:26:59.243951    4788 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 12:26:59.248165    4788 out.go:235]   - Booting up control plane ...
	I0919 12:26:59.248229    4788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 12:26:59.248286    4788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 12:26:59.248329    4788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 12:26:59.248409    4788 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 12:26:59.248488    4788 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 12:26:57.551769    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:26:57.551785    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:26:57.566865    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:26:57.566875    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:26:57.582414    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:26:57.582425    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:26:57.599714    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:57.599733    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:57.626239    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:57.626255    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:57.631422    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:26:57.631434    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:00.149610    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:04.249880    4788 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002629 seconds
	I0919 12:27:04.249960    4788 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 12:27:04.254044    4788 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 12:27:04.767823    4788 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 12:27:04.768088    4788 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-269000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 12:27:05.273090    4788 kubeadm.go:310] [bootstrap-token] Using token: gqikgj.g8ry9h3d1m1lhgda
	I0919 12:27:05.276261    4788 out.go:235]   - Configuring RBAC rules ...
	I0919 12:27:05.276319    4788 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 12:27:05.276359    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 12:27:05.282493    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 12:27:05.283664    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 12:27:05.284875    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 12:27:05.285982    4788 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 12:27:05.290095    4788 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 12:27:05.482907    4788 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 12:27:05.676810    4788 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 12:27:05.677409    4788 kubeadm.go:310] 
	I0919 12:27:05.677442    4788 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 12:27:05.677445    4788 kubeadm.go:310] 
	I0919 12:27:05.677480    4788 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 12:27:05.677483    4788 kubeadm.go:310] 
	I0919 12:27:05.677496    4788 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 12:27:05.677531    4788 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 12:27:05.677559    4788 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 12:27:05.677564    4788 kubeadm.go:310] 
	I0919 12:27:05.677592    4788 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 12:27:05.677596    4788 kubeadm.go:310] 
	I0919 12:27:05.677622    4788 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 12:27:05.677626    4788 kubeadm.go:310] 
	I0919 12:27:05.677650    4788 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 12:27:05.677686    4788 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 12:27:05.677724    4788 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 12:27:05.677727    4788 kubeadm.go:310] 
	I0919 12:27:05.677769    4788 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 12:27:05.677809    4788 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 12:27:05.677812    4788 kubeadm.go:310] 
	I0919 12:27:05.677852    4788 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gqikgj.g8ry9h3d1m1lhgda \
	I0919 12:27:05.677902    4788 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 \
	I0919 12:27:05.677913    4788 kubeadm.go:310] 	--control-plane 
	I0919 12:27:05.677919    4788 kubeadm.go:310] 
	I0919 12:27:05.677963    4788 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 12:27:05.677969    4788 kubeadm.go:310] 
	I0919 12:27:05.678020    4788 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gqikgj.g8ry9h3d1m1lhgda \
	I0919 12:27:05.678078    4788 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 
	I0919 12:27:05.678254    4788 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
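The Service-Kubelet warning above is kubeadm's own advice; applying it inside the guest is a one-liner (largely cosmetic here, since minikube starts the kubelet itself, per the "Starting the kubelet" line earlier):

	# Enable the kubelet unit so it starts on boot, as the warning suggests
	sudo systemctl enable kubelet.service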
	I0919 12:27:05.678311    4788 cni.go:84] Creating CNI manager for ""
	I0919 12:27:05.678320    4788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:27:05.684593    4788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 12:27:05.687931    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 12:27:05.690791    4788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
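The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For illustration only, a representative bridge conflist of roughly that shape; the field values below are assumptions, not the file minikube actually wrote:

	# Hypothetical reconstruction of a minimal bridge CNI config
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF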
	I0919 12:27:05.695933    4788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 12:27:05.696005    4788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-269000 minikube.k8s.io/updated_at=2024_09_19T12_27_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=stopped-upgrade-269000 minikube.k8s.io/primary=true
	I0919 12:27:05.696007    4788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 12:27:05.732025    4788 kubeadm.go:1113] duration metric: took 36.062583ms to wait for elevateKubeSystemPrivileges
	I0919 12:27:05.740905    4788 ops.go:34] apiserver oom_adj: -16
	I0919 12:27:05.740921    4788 kubeadm.go:394] duration metric: took 4m12.572005166s to StartCluster
	I0919 12:27:05.740934    4788 settings.go:142] acquiring lock: {Name:mk40c96dc3647741b89517369d27068bccfc0e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:27:05.741027    4788 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:27:05.741444    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:27:05.742012    4788 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:27:05.742037    4788 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 12:27:05.742073    4788 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-269000"
	I0919 12:27:05.742091    4788 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-269000"
	I0919 12:27:05.742093    4788 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-269000"
	W0919 12:27:05.742096    4788 addons.go:243] addon storage-provisioner should already be in state true
	I0919 12:27:05.742097    4788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-269000"
	I0919 12:27:05.742100    4788 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:27:05.742109    4788 host.go:66] Checking if "stopped-upgrade-269000" exists ...
	I0919 12:27:05.743012    4788 kapi.go:59] client config for stopped-upgrade-269000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104009800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:27:05.743136    4788 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-269000"
	W0919 12:27:05.743141    4788 addons.go:243] addon default-storageclass should already be in state true
	I0919 12:27:05.743147    4788 host.go:66] Checking if "stopped-upgrade-269000" exists ...
	I0919 12:27:05.746615    4788 out.go:177] * Verifying Kubernetes components...
	I0919 12:27:05.746937    4788 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 12:27:05.749715    4788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 12:27:05.749723    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:27:05.753557    4788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:27:05.150724    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:05.150856    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:27:05.161993    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:27:05.162086    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:27:05.176310    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:27:05.176402    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:27:05.187191    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:27:05.187274    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:27:05.198287    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:27:05.198377    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:27:05.208566    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:27:05.208641    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:27:05.219272    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:27:05.219357    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:27:05.230055    4610 logs.go:276] 0 containers: []
	W0919 12:27:05.230070    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:27:05.230146    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:27:05.240730    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:27:05.240750    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:27:05.240755    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:27:05.252906    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:27:05.252919    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:27:05.264484    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:27:05.264494    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:27:05.281178    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:27:05.281190    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:27:05.293977    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:27:05.293989    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:27:05.312556    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:27:05.312567    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:27:05.317265    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:27:05.317273    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:05.331853    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:27:05.331862    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:27:05.344316    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:27:05.344328    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:27:05.360149    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:27:05.360160    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:27:05.396734    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:27:05.396746    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:27:05.408745    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:27:05.408756    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:27:05.432033    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:27:05.432040    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:27:05.465658    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:27:05.465680    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:27:05.480670    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:27:05.480685    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:27:05.757624    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:27:05.761609    4788 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 12:27:05.761615    4788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 12:27:05.761622    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:27:05.851506    4788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:27:05.856410    4788 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:27:05.856460    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:27:05.860089    4788 api_server.go:72] duration metric: took 118.068709ms to wait for apiserver process to appear ...
	I0919 12:27:05.860097    4788 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:27:05.860104    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:05.865749    4788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 12:27:05.889700    4788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 12:27:06.216690    4788 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 12:27:06.216703    4788 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
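Whether the two addon applies above actually landed can be checked with the same pinned kubectl and kubeconfig the harness uses; a sketch:

	# Inspect the addon objects just applied (storage class + provisioner pod)
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass,pods -n kube-system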
	I0919 12:27:07.996607    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:10.861807    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:10.861847    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:12.998702    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:12.998908    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:27:13.011272    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:27:13.011367    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:27:13.021857    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:27:13.021937    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:27:13.033494    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:27:13.033585    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:27:13.044096    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:27:13.044179    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:27:13.055052    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:27:13.055142    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:27:13.065208    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:27:13.065304    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:27:13.075777    4610 logs.go:276] 0 containers: []
	W0919 12:27:13.075789    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:27:13.075868    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:27:13.086299    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:27:13.086321    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:27:13.086326    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:27:13.119624    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:27:13.119632    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:27:13.131516    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:27:13.131527    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:27:13.143181    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:27:13.143192    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:27:13.156262    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:27:13.156275    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:27:13.167758    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:27:13.167766    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:27:13.203061    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:27:13.203078    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:27:13.217164    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:27:13.217176    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:27:13.229149    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:27:13.229165    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:27:13.252790    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:27:13.252802    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:27:13.264069    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:27:13.264080    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:27:13.268783    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:27:13.268789    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:13.284146    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:27:13.284156    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:27:13.299388    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:27:13.299397    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:27:13.311177    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:27:13.311187    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:27:15.830718    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:15.861924    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:15.861942    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:20.832793    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:20.832966    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:27:20.847056    4610 logs.go:276] 1 containers: [1c6906813130]
	I0919 12:27:20.847145    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:27:20.858941    4610 logs.go:276] 1 containers: [c296493a7727]
	I0919 12:27:20.859036    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:27:20.871209    4610 logs.go:276] 4 containers: [aabc98abced0 1589e8a1a78c 201ff29b5789 62f159c99517]
	I0919 12:27:20.871302    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:27:20.883429    4610 logs.go:276] 1 containers: [4788575dac29]
	I0919 12:27:20.883522    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:27:20.895337    4610 logs.go:276] 1 containers: [96d083c691b9]
	I0919 12:27:20.895432    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:27:20.907442    4610 logs.go:276] 1 containers: [e926b08e8484]
	I0919 12:27:20.907534    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:27:20.925846    4610 logs.go:276] 0 containers: []
	W0919 12:27:20.925859    4610 logs.go:278] No container was found matching "kindnet"
	I0919 12:27:20.925935    4610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:27:20.936370    4610 logs.go:276] 1 containers: [98cf853f876a]
	I0919 12:27:20.936387    4610 logs.go:123] Gathering logs for kubelet ...
	I0919 12:27:20.936393    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:27:20.972463    4610 logs.go:123] Gathering logs for dmesg ...
	I0919 12:27:20.972474    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:27:20.977020    4610 logs.go:123] Gathering logs for coredns [201ff29b5789] ...
	I0919 12:27:20.977027    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 201ff29b5789"
	I0919 12:27:20.989262    4610 logs.go:123] Gathering logs for kube-scheduler [4788575dac29] ...
	I0919 12:27:20.989273    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4788575dac29"
	I0919 12:27:21.003931    4610 logs.go:123] Gathering logs for kube-proxy [96d083c691b9] ...
	I0919 12:27:21.003941    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96d083c691b9"
	I0919 12:27:21.016052    4610 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:27:21.016063    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:27:21.050974    4610 logs.go:123] Gathering logs for kube-apiserver [1c6906813130] ...
	I0919 12:27:21.050990    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c6906813130"
	I0919 12:27:21.065488    4610 logs.go:123] Gathering logs for etcd [c296493a7727] ...
	I0919 12:27:21.065499    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c296493a7727"
	I0919 12:27:21.080071    4610 logs.go:123] Gathering logs for coredns [aabc98abced0] ...
	I0919 12:27:21.080082    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aabc98abced0"
	I0919 12:27:21.091570    4610 logs.go:123] Gathering logs for coredns [1589e8a1a78c] ...
	I0919 12:27:21.091580    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1589e8a1a78c"
	I0919 12:27:21.103062    4610 logs.go:123] Gathering logs for kube-controller-manager [e926b08e8484] ...
	I0919 12:27:21.103073    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e926b08e8484"
	I0919 12:27:21.120808    4610 logs.go:123] Gathering logs for Docker ...
	I0919 12:27:21.120818    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:27:21.144873    4610 logs.go:123] Gathering logs for coredns [62f159c99517] ...
	I0919 12:27:21.144888    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f159c99517"
	I0919 12:27:21.159771    4610 logs.go:123] Gathering logs for storage-provisioner [98cf853f876a] ...
	I0919 12:27:21.159782    4610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98cf853f876a"
	I0919 12:27:21.171868    4610 logs.go:123] Gathering logs for container status ...
	I0919 12:27:21.171880    4610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:27:20.862433    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:20.862457    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:23.685071    4610 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:28.687317    4610 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:28.692711    4610 out.go:201] 
	W0919 12:27:28.695886    4610 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0919 12:27:28.695896    4610 out.go:270] * 
	W0919 12:27:28.696727    4610 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:27:28.707859    4610 out.go:201] 
	I0919 12:27:25.862687    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:25.862742    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:30.863181    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:30.863210    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:35.863769    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:35.863811    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0919 12:27:36.218201    4788 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0919 12:27:36.227315    4788 out.go:177] * Enabled addons: storage-provisioner
	I0919 12:27:36.234522    4788 addons.go:510] duration metric: took 30.493432917s for enable addons: enabled=[storage-provisioner]
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-09-19 19:18:42 UTC, ends at Thu 2024-09-19 19:27:44 UTC. --
	Sep 19 19:27:29 running-upgrade-356000 dockerd[2891]: time="2024-09-19T19:27:29.310695868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 19:27:29 running-upgrade-356000 dockerd[2891]: time="2024-09-19T19:27:29.310783780Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/11628b7bd9bb1fe7608619d5e8ef7d7e2e87523e071b8bbaa56b27e033928dc0 pid=18431 runtime=io.containerd.runc.v2
	Sep 19 19:27:29 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:29Z" level=error msg="ContainerStats resp: {0x4000859380 linux}"
	Sep 19 19:27:29 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:29Z" level=error msg="ContainerStats resp: {0x4000859b40 linux}"
	Sep 19 19:27:30 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:30Z" level=error msg="ContainerStats resp: {0x4000894140 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x400094d540 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x400094d9c0 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x40008956c0 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x40001785c0 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x4000178700 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x4000178dc0 linux}"
	Sep 19 19:27:31 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:31Z" level=error msg="ContainerStats resp: {0x4000179380 linux}"
	Sep 19 19:27:32 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 19 19:27:37 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 19 19:27:41 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:41Z" level=error msg="ContainerStats resp: {0x40004c8280 linux}"
	Sep 19 19:27:41 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:41Z" level=error msg="ContainerStats resp: {0x40004c91c0 linux}"
	Sep 19 19:27:42 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 19 19:27:42 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:42Z" level=error msg="ContainerStats resp: {0x4000178180 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000894fc0 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000895640 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000179380 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000895ec0 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000179f80 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000894380 linux}"
	Sep 19 19:27:43 running-upgrade-356000 cri-dockerd[2733]: time="2024-09-19T19:27:43Z" level=error msg="ContainerStats resp: {0x4000858300 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	11628b7bd9bb1       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   5b8290e034b53
	2d4b327812b63       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   0075eb201a43a
	aabc98abced0e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0075eb201a43a
	1589e8a1a78c9       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5b8290e034b53
	96d083c691b9a       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   cce55b0d7e5f4
	98cf853f876a8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   4961310f634e7
	4788575dac29e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   6912e9c6deb8f
	e926b08e8484a       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   e24f8c3c39fab
	1c6906813130f       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   a5c84e1dd9f72
	c296493a77275       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   be7b37865e45b
	
	
	==> coredns [11628b7bd9bb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7613378809513837503.5486280278533690408. HINFO: read udp 10.244.0.2:35878->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7613378809513837503.5486280278533690408. HINFO: read udp 10.244.0.2:49619->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7613378809513837503.5486280278533690408. HINFO: read udp 10.244.0.2:57767->10.0.2.3:53: i/o timeout
	
	
	==> coredns [1589e8a1a78c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:42577->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:52638->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:43489->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:50617->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:60182->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:42935->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:37060->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:59369->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:46769->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7485234791293139176.4730428496955281367. HINFO: read udp 10.244.0.2:60028->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2d4b327812b6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3513127177357899662.422818612802449036. HINFO: read udp 10.244.0.3:49743->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3513127177357899662.422818612802449036. HINFO: read udp 10.244.0.3:41061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3513127177357899662.422818612802449036. HINFO: read udp 10.244.0.3:56979->10.0.2.3:53: i/o timeout
	
	
	==> coredns [aabc98abced0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:45858->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:51259->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:51307->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:47632->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:55334->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:44686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:59529->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:60477->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:55998->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3787705453551784341.3716835465397057225. HINFO: read udp 10.244.0.3:33460->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-356000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-356000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=running-upgrade-356000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T12_23_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:23:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-356000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:27:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:23:27 +0000   Thu, 19 Sep 2024 19:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:23:27 +0000   Thu, 19 Sep 2024 19:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:23:27 +0000   Thu, 19 Sep 2024 19:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:23:27 +0000   Thu, 19 Sep 2024 19:23:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-356000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 289a2a07a5974543b1e19102bc8fad5b
	  System UUID:                289a2a07a5974543b1e19102bc8fad5b
	  Boot ID:                    4c38a722-e94e-4613-b553-63f5ae1a4140
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-g8phh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-rm4p9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-356000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-356000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-356000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-c4dmx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-356000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-356000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-356000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-356000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-356000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-356000 event: Registered Node running-upgrade-356000 in Controller
	
	
	==> dmesg <==
	[  +1.764492] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.083574] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.075957] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.136354] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.075476] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.066804] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.137409] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[Sep19 19:19] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
	[  +2.541724] systemd-fstab-generator[2210]: Ignoring "noauto" for root device
	[  +0.154117] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.088759] systemd-fstab-generator[2259]: Ignoring "noauto" for root device
	[  +0.093566] systemd-fstab-generator[2274]: Ignoring "noauto" for root device
	[  +1.591256] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.145380] systemd-fstab-generator[2690]: Ignoring "noauto" for root device
	[  +0.079944] systemd-fstab-generator[2701]: Ignoring "noauto" for root device
	[  +0.083527] systemd-fstab-generator[2712]: Ignoring "noauto" for root device
	[  +0.086281] systemd-fstab-generator[2726]: Ignoring "noauto" for root device
	[  +2.303299] systemd-fstab-generator[2878]: Ignoring "noauto" for root device
	[  +2.263001] systemd-fstab-generator[3226]: Ignoring "noauto" for root device
	[  +1.262241] systemd-fstab-generator[3482]: Ignoring "noauto" for root device
	[ +21.224169] kauditd_printk_skb: 68 callbacks suppressed
	[Sep19 19:23] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.657813] systemd-fstab-generator[11551]: Ignoring "noauto" for root device
	[  +5.638216] systemd-fstab-generator[12133]: Ignoring "noauto" for root device
	[  +0.449130] systemd-fstab-generator[12269]: Ignoring "noauto" for root device
	
	
	==> etcd [c296493a7727] <==
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-19T19:23:23.471Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-356000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:23:23.964Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:23:23.965Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-19T19:23:23.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:23:23.965Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T19:23:23.966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:23:23.966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:23:23.966Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:23:23.972Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T19:23:23.974Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:27:45 up 9 min,  0 users,  load average: 0.29, 0.32, 0.18
	Linux running-upgrade-356000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1c6906813130] <==
	I0919 19:23:25.272950       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0919 19:23:25.272976       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0919 19:23:25.273010       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 19:23:25.273048       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 19:23:25.273063       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0919 19:23:25.273082       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:23:25.277318       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0919 19:23:26.004328       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 19:23:26.186216       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0919 19:23:26.191898       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 19:23:26.192111       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 19:23:26.345640       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 19:23:26.355396       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 19:23:26.434673       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0919 19:23:26.436515       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0919 19:23:26.436949       1 controller.go:611] quota admission added evaluator for: endpoints
	I0919 19:23:26.438214       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 19:23:27.308234       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0919 19:23:27.774061       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0919 19:23:27.777436       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0919 19:23:27.791631       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0919 19:23:27.842986       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:23:40.962039       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0919 19:23:41.063938       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0919 19:23:41.466555       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e926b08e8484] <==
	I0919 19:23:40.221572       1 shared_informer.go:262] Caches are synced for TTL
	I0919 19:23:40.233501       1 shared_informer.go:262] Caches are synced for GC
	I0919 19:23:40.233531       1 shared_informer.go:262] Caches are synced for attach detach
	I0919 19:23:40.258246       1 shared_informer.go:262] Caches are synced for persistent volume
	I0919 19:23:40.258246       1 shared_informer.go:262] Caches are synced for cronjob
	I0919 19:23:40.260371       1 shared_informer.go:262] Caches are synced for taint
	I0919 19:23:40.260403       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0919 19:23:40.260423       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-356000. Assuming now as a timestamp.
	I0919 19:23:40.260438       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0919 19:23:40.260565       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0919 19:23:40.260577       1 event.go:294] "Event occurred" object="running-upgrade-356000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-356000 event: Registered Node running-upgrade-356000 in Controller"
	I0919 19:23:40.309986       1 shared_informer.go:262] Caches are synced for daemon sets
	I0919 19:23:40.311025       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0919 19:23:40.339156       1 shared_informer.go:262] Caches are synced for resource quota
	I0919 19:23:40.360269       1 shared_informer.go:262] Caches are synced for disruption
	I0919 19:23:40.360313       1 disruption.go:371] Sending events to api server.
	I0919 19:23:40.360273       1 shared_informer.go:262] Caches are synced for deployment
	I0919 19:23:40.362446       1 shared_informer.go:262] Caches are synced for resource quota
	I0919 19:23:40.777421       1 shared_informer.go:262] Caches are synced for garbage collector
	I0919 19:23:40.816482       1 shared_informer.go:262] Caches are synced for garbage collector
	I0919 19:23:40.816494       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0919 19:23:40.965841       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c4dmx"
	I0919 19:23:41.065867       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0919 19:23:41.164593       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-g8phh"
	I0919 19:23:41.171641       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rm4p9"
	
	
	==> kube-proxy [96d083c691b9] <==
	I0919 19:23:41.452675       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0919 19:23:41.452708       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0919 19:23:41.452791       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0919 19:23:41.464325       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0919 19:23:41.464339       1 server_others.go:206] "Using iptables Proxier"
	I0919 19:23:41.464354       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0919 19:23:41.464459       1 server.go:661] "Version info" version="v1.24.1"
	I0919 19:23:41.464468       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:23:41.464948       1 config.go:317] "Starting service config controller"
	I0919 19:23:41.464952       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0919 19:23:41.464959       1 config.go:226] "Starting endpoint slice config controller"
	I0919 19:23:41.464961       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0919 19:23:41.465720       1 config.go:444] "Starting node config controller"
	I0919 19:23:41.465747       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0919 19:23:41.565927       1 shared_informer.go:262] Caches are synced for node config
	I0919 19:23:41.565942       1 shared_informer.go:262] Caches are synced for service config
	I0919 19:23:41.565952       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4788575dac29] <==
	W0919 19:23:25.226511       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 19:23:25.226514       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0919 19:23:25.226525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 19:23:25.226528       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 19:23:25.226550       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 19:23:25.226559       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 19:23:25.226572       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 19:23:25.226574       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0919 19:23:25.226594       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 19:23:25.226598       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 19:23:25.226611       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 19:23:25.226618       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 19:23:25.226792       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 19:23:25.226819       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 19:23:26.041998       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:23:26.042090       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 19:23:26.092002       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 19:23:26.092053       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 19:23:26.108002       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 19:23:26.108062       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 19:23:26.168256       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 19:23:26.168365       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 19:23:26.226610       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 19:23:26.226642       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0919 19:23:28.822660       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-09-19 19:18:42 UTC, ends at Thu 2024-09-19 19:27:45 UTC. --
	Sep 19 19:23:29 running-upgrade-356000 kubelet[12145]: E0919 19:23:29.812560   12145 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-356000\" already exists" pod="kube-system/etcd-running-upgrade-356000"
	Sep 19 19:23:30 running-upgrade-356000 kubelet[12145]: I0919 19:23:30.002572   12145 request.go:601] Waited for 1.110402536s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 19 19:23:30 running-upgrade-356000 kubelet[12145]: E0919 19:23:30.005717   12145 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-356000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-356000"
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: I0919 19:23:40.233102   12145 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: I0919 19:23:40.233440   12145 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: I0919 19:23:40.265664   12145 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: I0919 19:23:40.435255   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddjd\" (UniqueName: \"kubernetes.io/projected/93d74c63-4aea-4cf1-84ab-214ee12752f3-kube-api-access-xddjd\") pod \"storage-provisioner\" (UID: \"93d74c63-4aea-4cf1-84ab-214ee12752f3\") " pod="kube-system/storage-provisioner"
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: I0919 19:23:40.435287   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/93d74c63-4aea-4cf1-84ab-214ee12752f3-tmp\") pod \"storage-provisioner\" (UID: \"93d74c63-4aea-4cf1-84ab-214ee12752f3\") " pod="kube-system/storage-provisioner"
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: E0919 19:23:40.541346   12145 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: E0919 19:23:40.541367   12145 projected.go:192] Error preparing data for projected volume kube-api-access-xddjd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: E0919 19:23:40.541404   12145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/93d74c63-4aea-4cf1-84ab-214ee12752f3-kube-api-access-xddjd podName:93d74c63-4aea-4cf1-84ab-214ee12752f3 nodeName:}" failed. No retries permitted until 2024-09-19 19:23:41.041390251 +0000 UTC m=+13.277461074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xddjd" (UniqueName: "kubernetes.io/projected/93d74c63-4aea-4cf1-84ab-214ee12752f3-kube-api-access-xddjd") pod "storage-provisioner" (UID: "93d74c63-4aea-4cf1-84ab-214ee12752f3") : configmap "kube-root-ca.crt" not found
	Sep 19 19:23:40 running-upgrade-356000 kubelet[12145]: I0919 19:23:40.972765   12145 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.147715   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3650200-3456-4219-89ef-038817edafbe-kube-proxy\") pod \"kube-proxy-c4dmx\" (UID: \"a3650200-3456-4219-89ef-038817edafbe\") " pod="kube-system/kube-proxy-c4dmx"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.147745   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3650200-3456-4219-89ef-038817edafbe-xtables-lock\") pod \"kube-proxy-c4dmx\" (UID: \"a3650200-3456-4219-89ef-038817edafbe\") " pod="kube-system/kube-proxy-c4dmx"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.147756   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3650200-3456-4219-89ef-038817edafbe-lib-modules\") pod \"kube-proxy-c4dmx\" (UID: \"a3650200-3456-4219-89ef-038817edafbe\") " pod="kube-system/kube-proxy-c4dmx"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.147767   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2tp\" (UniqueName: \"kubernetes.io/projected/a3650200-3456-4219-89ef-038817edafbe-kube-api-access-4w2tp\") pod \"kube-proxy-c4dmx\" (UID: \"a3650200-3456-4219-89ef-038817edafbe\") " pod="kube-system/kube-proxy-c4dmx"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.168710   12145 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.176005   12145 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.248344   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmm9\" (UniqueName: \"kubernetes.io/projected/462e33a0-2f15-41d5-808a-9009d49e6ebb-kube-api-access-zmmm9\") pod \"coredns-6d4b75cb6d-rm4p9\" (UID: \"462e33a0-2f15-41d5-808a-9009d49e6ebb\") " pod="kube-system/coredns-6d4b75cb6d-rm4p9"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.248374   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdb1a916-3a08-40d6-b587-27d35187d4d7-config-volume\") pod \"coredns-6d4b75cb6d-g8phh\" (UID: \"fdb1a916-3a08-40d6-b587-27d35187d4d7\") " pod="kube-system/coredns-6d4b75cb6d-g8phh"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.248396   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/462e33a0-2f15-41d5-808a-9009d49e6ebb-config-volume\") pod \"coredns-6d4b75cb6d-rm4p9\" (UID: \"462e33a0-2f15-41d5-808a-9009d49e6ebb\") " pod="kube-system/coredns-6d4b75cb6d-rm4p9"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.248411   12145 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk82v\" (UniqueName: \"kubernetes.io/projected/fdb1a916-3a08-40d6-b587-27d35187d4d7-kube-api-access-xk82v\") pod \"coredns-6d4b75cb6d-g8phh\" (UID: \"fdb1a916-3a08-40d6-b587-27d35187d4d7\") " pod="kube-system/coredns-6d4b75cb6d-g8phh"
	Sep 19 19:23:41 running-upgrade-356000 kubelet[12145]: I0919 19:23:41.953684   12145 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5b8290e034b5392beba48931821acfb21012137c0a21a57fbf990e51231b269b"
	Sep 19 19:27:29 running-upgrade-356000 kubelet[12145]: I0919 19:27:29.369500   12145 scope.go:110] "RemoveContainer" containerID="62f159c9951788b8ab3fb68bb4c5b1a4696ff2d7fb9406fe8162d3eb3bdf652e"
	Sep 19 19:27:29 running-upgrade-356000 kubelet[12145]: I0919 19:27:29.384627   12145 scope.go:110] "RemoveContainer" containerID="201ff29b5789c8843aa6511b077874359a17307b124f205a6474950cce831994"
	
	
	==> storage-provisioner [98cf853f876a] <==
	I0919 19:23:41.432075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 19:23:41.437214       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 19:23:41.437233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 19:23:41.444926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 19:23:41.445235       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a309d886-0f36-437f-9a1c-4430471a9c37", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-356000_e14737f3-2598-4ac0-a852-fc1252668256 became leader
	I0919 19:23:41.445249       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-356000_e14737f3-2598-4ac0-a852-fc1252668256!
	I0919 19:23:41.546321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-356000_e14737f3-2598-4ac0-a852-fc1252668256!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-356000 -n running-upgrade-356000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-356000 -n running-upgrade-356000: exit status 2 (15.7345945s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-356000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-356000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-356000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-356000: (1.182664375s)
--- FAIL: TestRunningBinaryUpgrade (590.48s)
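
For reference, the GUEST_START failure above is minikube's apiserver health poller giving up: each api_server.go:253/269 pair in the stderr log is one HTTPS GET against /healthz that times out. A minimal manual probe of the same endpoint (a sketch, not part of the test suite; the IP and port are taken from the log, and -k is needed because the apiserver serves minikube's self-signed certificate) would be:

	# Hypothetical manual check from a host that can reach the guest network;
	# a healthy apiserver answers "ok", while this run would time out after 5s,
	# matching the repeated "context deadline exceeded" errors above.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz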

TestKubernetesUpgrade (18.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-186000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-186000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.006271416s)

-- stdout --
	* [kubernetes-upgrade-186000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-186000" primary control-plane node in "kubernetes-upgrade-186000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-186000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
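
Note on the error above: the qemu2 driver could not reach the socket_vmnet daemon ("Failed to connect to /var/run/socket_vmnet: Connection refused"), so VM creation fails before Kubernetes is involved at all. A quick manual diagnosis (a sketch; the paths are the SocketVMnetPath and SocketVMnetClientPath values from the cluster config logged below, and it assumes socket_vmnet runs as a launchd service, the usual setup on macOS) might be:

	# Hypothetical checks: confirm the unix socket exists and that a daemon is
	# actually registered to listen on it.
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet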
** stderr ** 
	I0919 12:21:11.861814    4702 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:21:11.861972    4702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:21:11.861976    4702 out.go:358] Setting ErrFile to fd 2...
	I0919 12:21:11.861978    4702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:21:11.862121    4702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:21:11.863246    4702 out.go:352] Setting JSON to false
	I0919 12:21:11.880163    4702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3036,"bootTime":1726770635,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:21:11.880252    4702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:21:11.886366    4702 out.go:177] * [kubernetes-upgrade-186000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:21:11.894276    4702 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:21:11.894336    4702 notify.go:220] Checking for updates...
	I0919 12:21:11.901191    4702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:21:11.904168    4702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:21:11.907168    4702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:21:11.910141    4702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:21:11.913203    4702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:21:11.916555    4702 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:21:11.916616    4702 config.go:182] Loaded profile config "running-upgrade-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:21:11.916662    4702 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:21:11.921184    4702 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:21:11.928158    4702 start.go:297] selected driver: qemu2
	I0919 12:21:11.928165    4702 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:21:11.928171    4702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:21:11.930344    4702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:21:11.933169    4702 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:21:11.936304    4702 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 12:21:11.936316    4702 cni.go:84] Creating CNI manager for ""
	I0919 12:21:11.936334    4702 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 12:21:11.936363    4702 start.go:340] cluster config:
	{Name:kubernetes-upgrade-186000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:21:11.939724    4702 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:21:11.947177    4702 out.go:177] * Starting "kubernetes-upgrade-186000" primary control-plane node in "kubernetes-upgrade-186000" cluster
	I0919 12:21:11.951217    4702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 12:21:11.951232    4702 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 12:21:11.951241    4702 cache.go:56] Caching tarball of preloaded images
	I0919 12:21:11.951327    4702 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:21:11.951332    4702 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0919 12:21:11.951389    4702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kubernetes-upgrade-186000/config.json ...
	I0919 12:21:11.951399    4702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kubernetes-upgrade-186000/config.json: {Name:mke0daa363e3f2c98416f4dd4ddb2b9410185739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:21:11.951670    4702 start.go:360] acquireMachinesLock for kubernetes-upgrade-186000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:21:11.951705    4702 start.go:364] duration metric: took 26.209µs to acquireMachinesLock for "kubernetes-upgrade-186000"
	I0919 12:21:11.951714    4702 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:21:11.951736    4702 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:21:11.962181    4702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:21:11.977282    4702 start.go:159] libmachine.API.Create for "kubernetes-upgrade-186000" (driver="qemu2")
	I0919 12:21:11.977317    4702 client.go:168] LocalClient.Create starting
	I0919 12:21:11.977374    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:21:11.977406    4702 main.go:141] libmachine: Decoding PEM data...
	I0919 12:21:11.977415    4702 main.go:141] libmachine: Parsing certificate...
	I0919 12:21:11.977457    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:21:11.977480    4702 main.go:141] libmachine: Decoding PEM data...
	I0919 12:21:11.977487    4702 main.go:141] libmachine: Parsing certificate...
	I0919 12:21:11.977860    4702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:21:12.146028    4702 main.go:141] libmachine: Creating SSH key...
	I0919 12:21:12.257787    4702 main.go:141] libmachine: Creating Disk image...
	I0919 12:21:12.257799    4702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:21:12.258002    4702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:12.267353    4702 main.go:141] libmachine: STDOUT: 
	I0919 12:21:12.267367    4702 main.go:141] libmachine: STDERR: 
	I0919 12:21:12.267438    4702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2 +20000M
	I0919 12:21:12.275368    4702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:21:12.275381    4702 main.go:141] libmachine: STDERR: 
	I0919 12:21:12.275405    4702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:12.275411    4702 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:21:12.275424    4702 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:21:12.275451    4702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:54:c4:4f:8c:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:12.277049    4702 main.go:141] libmachine: STDOUT: 
	I0919 12:21:12.277063    4702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:21:12.277085    4702 client.go:171] duration metric: took 299.767417ms to LocalClient.Create
	I0919 12:21:14.279243    4702 start.go:128] duration metric: took 2.327539125s to createHost
	I0919 12:21:14.279386    4702 start.go:83] releasing machines lock for "kubernetes-upgrade-186000", held for 2.327704333s
	W0919 12:21:14.279441    4702 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:21:14.286587    4702 out.go:177] * Deleting "kubernetes-upgrade-186000" in qemu2 ...
	W0919 12:21:14.318213    4702 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:21:14.318241    4702 start.go:729] Will try again in 5 seconds ...
	I0919 12:21:19.320420    4702 start.go:360] acquireMachinesLock for kubernetes-upgrade-186000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:21:19.321025    4702 start.go:364] duration metric: took 480.666µs to acquireMachinesLock for "kubernetes-upgrade-186000"
	I0919 12:21:19.321187    4702 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:21:19.321497    4702 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:21:19.328227    4702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:21:19.377616    4702 start.go:159] libmachine.API.Create for "kubernetes-upgrade-186000" (driver="qemu2")
	I0919 12:21:19.377669    4702 client.go:168] LocalClient.Create starting
	I0919 12:21:19.377806    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:21:19.377895    4702 main.go:141] libmachine: Decoding PEM data...
	I0919 12:21:19.377911    4702 main.go:141] libmachine: Parsing certificate...
	I0919 12:21:19.377982    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:21:19.378028    4702 main.go:141] libmachine: Decoding PEM data...
	I0919 12:21:19.378041    4702 main.go:141] libmachine: Parsing certificate...
	I0919 12:21:19.378604    4702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:21:19.553153    4702 main.go:141] libmachine: Creating SSH key...
	I0919 12:21:19.774806    4702 main.go:141] libmachine: Creating Disk image...
	I0919 12:21:19.774819    4702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:21:19.775063    4702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:19.785041    4702 main.go:141] libmachine: STDOUT: 
	I0919 12:21:19.785058    4702 main.go:141] libmachine: STDERR: 
	I0919 12:21:19.785125    4702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2 +20000M
	I0919 12:21:19.793215    4702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:21:19.793230    4702 main.go:141] libmachine: STDERR: 
	I0919 12:21:19.793243    4702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:19.793248    4702 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:21:19.793256    4702 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:21:19.793287    4702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:02:8d:5a:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:19.794981    4702 main.go:141] libmachine: STDOUT: 
	I0919 12:21:19.794997    4702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:21:19.795009    4702 client.go:171] duration metric: took 417.345042ms to LocalClient.Create
	I0919 12:21:21.796391    4702 start.go:128] duration metric: took 2.474922375s to createHost
	I0919 12:21:21.796435    4702 start.go:83] releasing machines lock for "kubernetes-upgrade-186000", held for 2.475455584s
	W0919 12:21:21.796554    4702 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:21:21.805769    4702 out.go:201] 
	W0919 12:21:21.809778    4702 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:21:21.809796    4702 out.go:270] * 
	* 
	W0919 12:21:21.810376    4702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:21:21.830271    4702 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-186000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-186000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-186000: (3.170299584s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-186000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-186000 status --format={{.Host}}: exit status 7 (63.912667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-186000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-186000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189983875s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-186000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-186000" primary control-plane node in "kubernetes-upgrade-186000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-186000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-186000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:21:25.105894    4743 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:21:25.106017    4743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:21:25.106020    4743 out.go:358] Setting ErrFile to fd 2...
	I0919 12:21:25.106022    4743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:21:25.106153    4743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:21:25.107141    4743 out.go:352] Setting JSON to false
	I0919 12:21:25.123177    4743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3050,"bootTime":1726770635,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:21:25.123239    4743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:21:25.128280    4743 out.go:177] * [kubernetes-upgrade-186000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:21:25.136574    4743 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:21:25.136629    4743 notify.go:220] Checking for updates...
	I0919 12:21:25.144257    4743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:21:25.147185    4743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:21:25.150233    4743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:21:25.153288    4743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:21:25.156206    4743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:21:25.159488    4743 config.go:182] Loaded profile config "kubernetes-upgrade-186000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0919 12:21:25.159778    4743 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:21:25.164268    4743 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:21:25.171239    4743 start.go:297] selected driver: qemu2
	I0919 12:21:25.171247    4743 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:21:25.171321    4743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:21:25.173581    4743 cni.go:84] Creating CNI manager for ""
	I0919 12:21:25.173617    4743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:21:25.173649    4743 start.go:340] cluster config:
	{Name:kubernetes-upgrade-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:21:25.177043    4743 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:21:25.184184    4743 out.go:177] * Starting "kubernetes-upgrade-186000" primary control-plane node in "kubernetes-upgrade-186000" cluster
	I0919 12:21:25.188252    4743 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:21:25.188267    4743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:21:25.188275    4743 cache.go:56] Caching tarball of preloaded images
	I0919 12:21:25.188328    4743 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:21:25.188333    4743 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:21:25.188386    4743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kubernetes-upgrade-186000/config.json ...
	I0919 12:21:25.188921    4743 start.go:360] acquireMachinesLock for kubernetes-upgrade-186000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:21:25.188950    4743 start.go:364] duration metric: took 22.291µs to acquireMachinesLock for "kubernetes-upgrade-186000"
	I0919 12:21:25.188958    4743 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:21:25.188965    4743 fix.go:54] fixHost starting: 
	I0919 12:21:25.189080    4743 fix.go:112] recreateIfNeeded on kubernetes-upgrade-186000: state=Stopped err=<nil>
	W0919 12:21:25.189090    4743 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:21:25.197250    4743 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-186000" ...
	I0919 12:21:25.201279    4743 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:21:25.201314    4743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:02:8d:5a:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:25.203203    4743 main.go:141] libmachine: STDOUT: 
	I0919 12:21:25.203222    4743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:21:25.203249    4743 fix.go:56] duration metric: took 14.284375ms for fixHost
	I0919 12:21:25.203255    4743 start.go:83] releasing machines lock for "kubernetes-upgrade-186000", held for 14.300625ms
	W0919 12:21:25.203260    4743 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:21:25.203305    4743 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:21:25.203310    4743 start.go:729] Will try again in 5 seconds ...
	I0919 12:21:30.205429    4743 start.go:360] acquireMachinesLock for kubernetes-upgrade-186000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:21:30.206036    4743 start.go:364] duration metric: took 466.708µs to acquireMachinesLock for "kubernetes-upgrade-186000"
	I0919 12:21:30.206137    4743 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:21:30.206158    4743 fix.go:54] fixHost starting: 
	I0919 12:21:30.206929    4743 fix.go:112] recreateIfNeeded on kubernetes-upgrade-186000: state=Stopped err=<nil>
	W0919 12:21:30.206957    4743 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:21:30.211531    4743 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-186000" ...
	I0919 12:21:30.218407    4743 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:21:30.218686    4743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:02:8d:5a:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubernetes-upgrade-186000/disk.qcow2
	I0919 12:21:30.228626    4743 main.go:141] libmachine: STDOUT: 
	I0919 12:21:30.228706    4743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:21:30.228822    4743 fix.go:56] duration metric: took 22.6615ms for fixHost
	I0919 12:21:30.228851    4743 start.go:83] releasing machines lock for "kubernetes-upgrade-186000", held for 22.789833ms
	W0919 12:21:30.229034    4743 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-186000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-186000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:21:30.236341    4743 out.go:201] 
	W0919 12:21:30.240623    4743 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:21:30.240659    4743 out.go:270] * 
	* 
	W0919 12:21:30.243775    4743 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:21:30.251493    4743 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-186000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-186000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-186000 version --output=json: exit status 1 (64.539584ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-186000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-19 12:21:30.33169 -0700 PDT m=+2596.859363043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-186000 -n kubernetes-upgrade-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-186000 -n kubernetes-upgrade-186000: exit status 7 (33.650291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-186000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-186000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-186000
--- FAIL: TestKubernetesUpgrade (18.61s)
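
Both start attempts above fail at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the upgrade path is never exercised. A minimal, hypothetical Go probe (not part of the test suite) that checks the same precondition by dialing the daemon's Unix socket:

	// probe_socket_vmnet.go — hypothetical diagnostic, assuming only the
	// socket path recorded in the failures above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the state these tests hit: nothing is listening on
			// the socket, so socket_vmnet_client refuses the connection.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

Run on the build agent before the qemu2 suite, a probe like this would distinguish a stopped or missing socket_vmnet daemon (an environment problem, as here) from a genuine minikube regression.
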

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.39s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19664
- KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3013877814/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.39s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.21s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19664
- KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2557383249/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.21s)
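
Both TestHyperkitDriverSkipUpgrade subtests fail for the same structural reason: the hyperkit driver exists only for darwin/amd64, so on this darwin/arm64 agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before the upgrade logic runs at all. A sketch of the kind of architecture guard these subtests would need, written as a Go test helper (illustrative only; the real driver_install_or_update_test.go may gate differently):

	// guard_test.go — illustrative sketch, not the actual test code.
	package main

	import (
		"runtime"
		"testing"
	)

	// skipUnlessHyperkitSupported skips on any platform where the hyperkit
	// driver cannot exist, so the DRV_UNSUPPORTED_OS exit seen above would
	// surface as a skip instead of a failure.
	func skipUnlessHyperkitSupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64; agent is %s/%s",
				runtime.GOOS, runtime.GOARCH)
		}
	}

With a guard like this, upgrade-v1.11.0-to-current and upgrade-v1.2.0-to-current would be reported as skipped on Apple silicon rather than failed.
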

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (575.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1905618105 start -p stopped-upgrade-269000 --memory=2200 --vm-driver=qemu2 
E0919 12:21:47.794625    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:21:56.049414    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1905618105 start -p stopped-upgrade-269000 --memory=2200 --vm-driver=qemu2 : (40.803246916s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1905618105 -p stopped-upgrade-269000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1905618105 -p stopped-upgrade-269000 stop: (12.116273417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-269000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0919 12:26:47.769419    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:26:56.024644    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-269000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.565046709s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-269000" primary control-plane node in "stopped-upgrade-269000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-269000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:22:24.566595    4788 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:22:24.566758    4788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:22:24.566764    4788 out.go:358] Setting ErrFile to fd 2...
	I0919 12:22:24.566767    4788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:22:24.566914    4788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:22:24.568094    4788 out.go:352] Setting JSON to false
	I0919 12:22:24.588100    4788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3109,"bootTime":1726770635,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:22:24.588178    4788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:22:24.593475    4788 out.go:177] * [stopped-upgrade-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:22:24.601547    4788 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:22:24.601660    4788 notify.go:220] Checking for updates...
	I0919 12:22:24.607462    4788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:22:24.610430    4788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:22:24.613477    4788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:22:24.614581    4788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:22:24.617503    4788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:22:24.620774    4788 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:22:24.624492    4788 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0919 12:22:24.627479    4788 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:22:24.631466    4788 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:22:24.638481    4788 start.go:297] selected driver: qemu2
	I0919 12:22:24.638490    4788 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:22:24.638547    4788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:22:24.641566    4788 cni.go:84] Creating CNI manager for ""
	I0919 12:22:24.641601    4788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:22:24.641620    4788 start.go:340] cluster config:
	{Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:22:24.641686    4788 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:22:24.648472    4788 out.go:177] * Starting "stopped-upgrade-269000" primary control-plane node in "stopped-upgrade-269000" cluster
	I0919 12:22:24.652454    4788 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0919 12:22:24.652487    4788 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0919 12:22:24.652493    4788 cache.go:56] Caching tarball of preloaded images
	I0919 12:22:24.652574    4788 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:22:24.652581    4788 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0919 12:22:24.652639    4788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/config.json ...
	I0919 12:22:24.653031    4788 start.go:360] acquireMachinesLock for stopped-upgrade-269000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:22:24.653062    4788 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "stopped-upgrade-269000"
	I0919 12:22:24.653071    4788 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:22:24.653079    4788 fix.go:54] fixHost starting: 
	I0919 12:22:24.653206    4788 fix.go:112] recreateIfNeeded on stopped-upgrade-269000: state=Stopped err=<nil>
	W0919 12:22:24.653215    4788 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:22:24.661432    4788 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-269000" ...
	I0919 12:22:24.665447    4788 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:22:24.665550    4788 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50504-:22,hostfwd=tcp::50505-:2376,hostname=stopped-upgrade-269000 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/disk.qcow2
	I0919 12:22:24.712880    4788 main.go:141] libmachine: STDOUT: 
	I0919 12:22:24.712924    4788 main.go:141] libmachine: STDERR: 
	I0919 12:22:24.712932    4788 main.go:141] libmachine: Waiting for VM to start (ssh -p 50504 docker@127.0.0.1)...
	I0919 12:22:44.319803    4788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/config.json ...
	I0919 12:22:44.320374    4788 machine.go:93] provisionDockerMachine start ...
	I0919 12:22:44.320493    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.320786    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.320802    4788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 12:22:44.386927    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0919 12:22:44.386947    4788 buildroot.go:166] provisioning hostname "stopped-upgrade-269000"
	I0919 12:22:44.387008    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.387127    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.387134    4788 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-269000 && echo "stopped-upgrade-269000" | sudo tee /etc/hostname
	I0919 12:22:44.446336    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-269000
	
	I0919 12:22:44.446398    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.446508    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.446516    4788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-269000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-269000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-269000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 12:22:44.506800    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 12:22:44.506814    4788 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19664-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19664-1099/.minikube}
	I0919 12:22:44.506823    4788 buildroot.go:174] setting up certificates
	I0919 12:22:44.506827    4788 provision.go:84] configureAuth start
	I0919 12:22:44.506838    4788 provision.go:143] copyHostCerts
	I0919 12:22:44.506924    4788 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem, removing ...
	I0919 12:22:44.506931    4788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem
	I0919 12:22:44.507099    4788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.pem (1078 bytes)
	I0919 12:22:44.507274    4788 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem, removing ...
	I0919 12:22:44.507277    4788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem
	I0919 12:22:44.507329    4788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/cert.pem (1123 bytes)
	I0919 12:22:44.507432    4788 exec_runner.go:144] found /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem, removing ...
	I0919 12:22:44.507435    4788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem
	I0919 12:22:44.507472    4788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19664-1099/.minikube/key.pem (1679 bytes)
	I0919 12:22:44.507567    4788 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-269000 san=[127.0.0.1 localhost minikube stopped-upgrade-269000]
	I0919 12:22:44.599769    4788 provision.go:177] copyRemoteCerts
	I0919 12:22:44.599820    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 12:22:44.599827    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:22:44.631068    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 12:22:44.637776    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 12:22:44.644792    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 12:22:44.651862    4788 provision.go:87] duration metric: took 145.027041ms to configureAuth
	I0919 12:22:44.651872    4788 buildroot.go:189] setting minikube options for container-runtime
	I0919 12:22:44.651987    4788 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:22:44.652035    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.652127    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.652131    4788 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 12:22:44.708464    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 12:22:44.708472    4788 buildroot.go:70] root file system type: tmpfs
	I0919 12:22:44.708528    4788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 12:22:44.708574    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.708680    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.708713    4788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 12:22:44.768708    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 12:22:44.768777    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:44.768885    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:44.768896    4788 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 12:22:45.132894    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 12:22:45.132914    4788 machine.go:96] duration metric: took 812.547833ms to provisionDockerMachine
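The diff failure just above is the expected first-provision path: /lib/systemd/system/docker.service does not exist yet, so diff exits non-zero, the || branch installs the freshly written unit, and the "Created symlink" line is systemctl enable's output. The update-if-changed idiom, spelled out as a standalone sketch:

# Install the candidate unit and bounce the service only when it differs from
# (or is missing from) the unit already on disk.
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
       sudo systemctl daemon-reload
       sudo systemctl enable docker
       sudo systemctl restart docker; }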
	I0919 12:22:45.132922    4788 start.go:293] postStartSetup for "stopped-upgrade-269000" (driver="qemu2")
	I0919 12:22:45.132929    4788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 12:22:45.132987    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 12:22:45.132996    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:22:45.163204    4788 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 12:22:45.164444    4788 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 12:22:45.164451    4788 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/addons for local assets ...
	I0919 12:22:45.164522    4788 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19664-1099/.minikube/files for local assets ...
	I0919 12:22:45.164642    4788 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0919 12:22:45.164744    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 12:22:45.167792    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0919 12:22:45.174493    4788 start.go:296] duration metric: took 41.564458ms for postStartSetup
	I0919 12:22:45.174508    4788 fix.go:56] duration metric: took 20.521994166s for fixHost
	I0919 12:22:45.174550    4788 main.go:141] libmachine: Using SSH client type: native
	I0919 12:22:45.174654    4788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a31190] 0x102a339d0 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0919 12:22:45.174661    4788 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 12:22:45.232183    4788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773765.728006837
	
	I0919 12:22:45.232193    4788 fix.go:216] guest clock: 1726773765.728006837
	I0919 12:22:45.232197    4788 fix.go:229] Guest: 2024-09-19 12:22:45.728006837 -0700 PDT Remote: 2024-09-19 12:22:45.17451 -0700 PDT m=+20.635819501 (delta=553.496837ms)
	I0919 12:22:45.232210    4788 fix.go:200] guest clock delta is within tolerance: 553.496837ms
	I0919 12:22:45.232212    4788 start.go:83] releasing machines lock for "stopped-upgrade-269000", held for 20.579709s
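The fix.go lines above read the guest clock over SSH, compare it to the host clock, and accept the 553 ms delta as within tolerance. The same measurement can be reproduced by hand (illustrative only; assumes the forwarded SSH port and key-based login from this log, and uses python3 for a sub-second host timestamp since macOS date has no %N):

guest=$(ssh -p 50504 docker@127.0.0.1 'date +%s.%N')    # guest epoch, ns resolution
host=$(python3 -c 'import time; print(f"{time.time():.9f}")')
python3 -c "print(f'clock delta: {abs($guest - $host):.3f}s')"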
	I0919 12:22:45.232291    4788 ssh_runner.go:195] Run: cat /version.json
	I0919 12:22:45.232294    4788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 12:22:45.232300    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:22:45.232312    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	W0919 12:22:45.232875    4788 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50504: connect: connection refused
	I0919 12:22:45.232900    4788 retry.go:31] will retry after 325.814447ms: dial tcp [::1]:50504: connect: connection refused
	W0919 12:22:45.617002    4788 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0919 12:22:45.617116    4788 ssh_runner.go:195] Run: systemctl --version
	I0919 12:22:45.620547    4788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 12:22:45.623381    4788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 12:22:45.623426    4788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0919 12:22:45.628277    4788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0919 12:22:45.639113    4788 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 12:22:45.639126    4788 start.go:495] detecting cgroup driver to use...
	I0919 12:22:45.639219    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 12:22:45.648756    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0919 12:22:45.652306    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 12:22:45.657354    4788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 12:22:45.657426    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 12:22:45.661626    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 12:22:45.666069    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 12:22:45.669778    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 12:22:45.673044    4788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 12:22:45.676750    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 12:22:45.680265    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 12:22:45.683290    4788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 12:22:45.686080    4788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 12:22:45.688762    4788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 12:22:45.691874    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:45.762286    4788 ssh_runner.go:195] Run: sudo systemctl restart containerd
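The sed chain above rewrites /etc/containerd/config.toml to match what the cluster expects: sandbox_image becomes registry.k8s.io/pause:3.7, SystemdCgroup is forced to false (the cgroupfs driver, matching the kubelet configuration generated later), the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, and conf_dir is pinned to /etc/cni/net.d. One representative edit, runnable on its own (even though Docker, not containerd, ends up as the runtime in this run):

# Force the cgroupfs cgroup driver in containerd's CRI plugin config.
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
sudo systemctl daemon-reload && sudo systemctl restart containerd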
	I0919 12:22:45.768318    4788 start.go:495] detecting cgroup driver to use...
	I0919 12:22:45.768395    4788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 12:22:45.776392    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 12:22:45.781162    4788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 12:22:45.788073    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 12:22:45.792208    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 12:22:45.797108    4788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 12:22:45.844782    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 12:22:45.849981    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 12:22:45.855247    4788 ssh_runner.go:195] Run: which cri-dockerd
	I0919 12:22:45.856613    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 12:22:45.859648    4788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 12:22:45.864646    4788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 12:22:45.944421    4788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 12:22:46.026182    4788 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 12:22:46.026238    4788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0919 12:22:46.031237    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:46.108000    4788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 12:22:47.262057    4788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154071875s)
	I0919 12:22:47.262131    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 12:22:47.266392    4788 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 12:22:47.272620    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 12:22:47.277318    4788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 12:22:47.359740    4788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 12:22:47.435010    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:47.512888    4788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 12:22:47.519026    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 12:22:47.523244    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:47.588222    4788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 12:22:47.628667    4788 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 12:22:47.628767    4788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 12:22:47.631766    4788 start.go:563] Will wait 60s for crictl version
	I0919 12:22:47.631829    4788 ssh_runner.go:195] Run: which crictl
	I0919 12:22:47.633147    4788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 12:22:47.647799    4788 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0919 12:22:47.647881    4788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 12:22:47.664128    4788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 12:22:47.684231    4788 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0919 12:22:47.684310    4788 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0919 12:22:47.685611    4788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 12:22:47.689313    4788 kubeadm.go:883] updating cluster {Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0919 12:22:47.689356    4788 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0919 12:22:47.689406    4788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 12:22:47.699677    4788 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 12:22:47.699686    4788 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0919 12:22:47.699744    4788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 12:22:47.702730    4788 ssh_runner.go:195] Run: which lz4
	I0919 12:22:47.703979    4788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 12:22:47.705221    4788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 12:22:47.705233    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0919 12:22:48.642081    4788 docker.go:649] duration metric: took 938.158958ms to copy over tarball
	I0919 12:22:48.642152    4788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 12:22:49.789434    4788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.147299417s)
	I0919 12:22:49.789449    4788 ssh_runner.go:146] rm: /preloaded.tar.lz4
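The tar invocation above is what unpacks the ~360 MB preload into the Docker graph. Annotated (same command as in the log):

# --xattrs --xattrs-include security.capability   preserve file-capability xattrs on binaries
# -I lz4                                          decompress the archive through lz4
# -C /var                                         extract relative to /var, so layers land in /var/lib/docker
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4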
	I0919 12:22:49.805230    4788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 12:22:49.808635    4788 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0919 12:22:49.813843    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:49.893079    4788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 12:22:51.658463    4788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.765416041s)
	I0919 12:22:51.658580    4788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 12:22:51.669772    4788 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 12:22:51.669788    4788 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0919 12:22:51.669793    4788 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 12:22:51.674085    4788 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:51.675908    4788 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:51.678123    4788 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:51.678407    4788 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:51.680460    4788 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:51.680529    4788 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:51.682301    4788 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:51.682298    4788 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:51.683836    4788 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:51.683912    4788 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:51.685069    4788 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0919 12:22:51.686175    4788 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:51.686439    4788 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:51.686844    4788 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:51.687185    4788 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 12:22:51.688910    4788 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.077487    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:52.079468    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:52.093460    4788 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0919 12:22:52.093489    4788 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:52.093563    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0919 12:22:52.101516    4788 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0919 12:22:52.101539    4788 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:52.101621    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0919 12:22:52.107617    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0919 12:22:52.112336    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0919 12:22:52.119154    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:52.122667    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:52.126353    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:52.134027    4788 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0919 12:22:52.134048    4788 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0919 12:22:52.134054    4788 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:52.134059    4788 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:52.134122    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0919 12:22:52.134122    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0919 12:22:52.143681    4788 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0919 12:22:52.143705    4788 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:52.143773    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0919 12:22:52.150225    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0919 12:22:52.151890    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0919 12:22:52.155493    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0919 12:22:52.166401    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0919 12:22:52.166413    4788 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0919 12:22:52.166431    4788 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0919 12:22:52.166497    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0919 12:22:52.176189    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0919 12:22:52.176313    4788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0919 12:22:52.177858    4788 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0919 12:22:52.177870    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0919 12:22:52.179312    4788 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0919 12:22:52.179428    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.185615    4788 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0919 12:22:52.185627    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0919 12:22:52.195313    4788 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0919 12:22:52.195338    4788 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.195409    4788 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 12:22:52.227709    4788 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0919 12:22:52.227751    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0919 12:22:52.227879    4788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0919 12:22:52.229245    4788 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0919 12:22:52.229258    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0919 12:22:52.269364    4788 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0919 12:22:52.269379    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0919 12:22:52.312630    4788 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0919 12:22:52.493286    4788 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0919 12:22:52.493481    4788 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:52.507866    4788 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0919 12:22:52.507901    4788 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:52.507983    4788 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:22:52.523672    4788 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0919 12:22:52.523806    4788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0919 12:22:52.525248    4788 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0919 12:22:52.525260    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0919 12:22:52.556434    4788 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0919 12:22:52.556450    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0919 12:22:52.794550    4788 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0919 12:22:52.794587    4788 cache_images.go:92] duration metric: took 1.124819709s to LoadCachedImages
	W0919 12:22:52.794628    4788 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
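Every image in this block follows the same three-step cycle: docker image inspect to see whether the VM already has it, an scp of the cached tarball from the host when it does not, then a pipe into docker load. Note the terminal warning: the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and etcd tarballs were never present in the host cache, so LoadCachedImages fails even though pause, coredns and storage-provisioner loaded. The load step, condensed into a standalone sketch (the glob assumes the tarballs have already been staged under /var/lib/minikube/images, as the scp lines above do):

# Stream each staged image tarball into dockerd, as the log does one by one.
for tarball in /var/lib/minikube/images/*; do
  sudo /bin/bash -c "cat '$tarball' | docker load"
done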
	I0919 12:22:52.794634    4788 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0919 12:22:52.794689    4788 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-269000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 12:22:52.794758    4788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 12:22:52.807781    4788 cni.go:84] Creating CNI manager for ""
	I0919 12:22:52.807793    4788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:22:52.807805    4788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 12:22:52.807813    4788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-269000 NodeName:stopped-upgrade-269000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 12:22:52.807881    4788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-269000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 12:22:52.807955    4788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0919 12:22:52.811007    4788 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 12:22:52.811043    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 12:22:52.813533    4788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0919 12:22:52.818490    4788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 12:22:52.823272    4788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0919 12:22:52.828385    4788 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0919 12:22:52.829573    4788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
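The one-liner above is the ensure-hosts-entry idiom (also used for host.minikube.internal earlier): filter out any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts rather than editing it in place. Spelled out:

# Rebuild /etc/hosts with exactly one control-plane.minikube.internal entry.
{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
  printf '10.0.2.15\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
sudo cp /tmp/h.$$ /etc/hosts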
	I0919 12:22:52.833265    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:22:52.919376    4788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:22:52.926397    4788 certs.go:68] Setting up /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000 for IP: 10.0.2.15
	I0919 12:22:52.926407    4788 certs.go:194] generating shared ca certs ...
	I0919 12:22:52.926416    4788 certs.go:226] acquiring lock for ca certs: {Name:mk207a98b59455406f5fa19947ac5c81f6753b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:52.926565    4788 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key
	I0919 12:22:52.926603    4788 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key
	I0919 12:22:52.926613    4788 certs.go:256] generating profile certs ...
	I0919 12:22:52.926673    4788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.key
	I0919 12:22:52.926696    4788 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be
	I0919 12:22:52.926709    4788 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0919 12:22:53.064991    4788 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be ...
	I0919 12:22:53.065008    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be: {Name:mk4ebd5ae5db10b2597167055ceae25473bd7724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:53.065963    4788 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be ...
	I0919 12:22:53.065970    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be: {Name:mkf32725161b788bb445ec4c580490c2d7786db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:53.066137    4788 certs.go:381] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt.d4ae76be -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt
	I0919 12:22:53.066293    4788 certs.go:385] copying /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key.d4ae76be -> /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key
	I0919 12:22:53.066433    4788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/proxy-client.key
	I0919 12:22:53.066566    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618.pem (1338 bytes)
	W0919 12:22:53.066588    4788 certs.go:480] ignoring /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0919 12:22:53.066593    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 12:22:53.066612    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem (1078 bytes)
	I0919 12:22:53.066630    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem (1123 bytes)
	I0919 12:22:53.066648    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/key.pem (1679 bytes)
	I0919 12:22:53.066685    4788 certs.go:484] found cert: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0919 12:22:53.067008    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 12:22:53.073834    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 12:22:53.080934    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 12:22:53.088221    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 12:22:53.095955    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 12:22:53.103146    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 12:22:53.109966    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 12:22:53.116794    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 12:22:53.124137    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0919 12:22:53.131268    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0919 12:22:53.137647    4788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 12:22:53.144512    4788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 12:22:53.150829    4788 ssh_runner.go:195] Run: openssl version
	I0919 12:22:53.152615    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 12:22:53.155574    4788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:22:53.156917    4788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:22:53.156940    4788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 12:22:53.158779    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 12:22:53.161585    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0919 12:22:53.164996    4788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0919 12:22:53.166391    4788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 18:54 /usr/share/ca-certificates/1618.pem
	I0919 12:22:53.166413    4788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0919 12:22:53.168068    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0919 12:22:53.171133    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0919 12:22:53.174027    4788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0919 12:22:53.175328    4788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 18:54 /usr/share/ca-certificates/16182.pem
	I0919 12:22:53.175347    4788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0919 12:22:53.177021    4788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 12:22:53.180536    4788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 12:22:53.181853    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 12:22:53.183654    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 12:22:53.185635    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 12:22:53.187447    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 12:22:53.189179    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 12:22:53.190989    4788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
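The six openssl probes above all use -checkend 86400, which asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, non-zero means it expires within the window (presumably prompting regeneration). For a single certificate:

# Check one control-plane cert against the same 24 h window.
if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "still valid for at least 24h"
else
  echo "expires within 24h"
fi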
	I0919 12:22:53.192850    4788 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0919 12:22:53.192927    4788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 12:22:53.204642    4788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 12:22:53.208171    4788 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 12:22:53.208185    4788 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0919 12:22:53.208215    4788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 12:22:53.210929    4788 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 12:22:53.211238    4788 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-269000" does not appear in /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:22:53.211338    4788 kubeconfig.go:62] /Users/jenkins/minikube-integration/19664-1099/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-269000" cluster setting kubeconfig missing "stopped-upgrade-269000" context setting]
	I0919 12:22:53.211539    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:22:53.212260    4788 kapi.go:59] client config for stopped-upgrade-269000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104009800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:22:53.212598    4788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 12:22:53.215320    4788 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-269000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
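
The diff above is why a plain restart is not enough: the upgraded minikube writes `criSocket` as a `unix://` URL (the scheme newer kubeadm expects for CRI endpoints), switches the kubelet cgroup driver from systemd to cgroupfs, and adds `hairpinMode`/`runtimeRequestTimeout`, so the cluster must be reconfigured from the new YAML. The drift check itself is just the `diff -u` run two lines earlier; a sketch of the same test:

    # non-zero diff exit status means the configs differ -> reconfigure
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift detected - cluster will be reconfigured"
    fi
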
	I0919 12:22:53.215326    4788 kubeadm.go:1160] stopping kube-system containers ...
	I0919 12:22:53.215379    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 12:22:53.227514    4788 docker.go:483] Stopping containers: [6e24dc0306c2 219994403f67 a04ca8cc8c56 9ceebd9f5b94 69919762f36d c50b0db508a6 3d2544e1d664 7a4823763f68]
	I0919 12:22:53.227605    4788 ssh_runner.go:195] Run: docker stop 6e24dc0306c2 219994403f67 a04ca8cc8c56 9ceebd9f5b94 69919762f36d c50b0db508a6 3d2544e1d664 7a4823763f68
	I0919 12:22:53.238347    4788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
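
Before reconfiguring, the runner tears the control plane down in dependency order: it resolves kube-system containers via the kubelet's `k8s_<name>_<pod>_(namespace)_` naming convention, stops them, then stops the kubelet so it cannot restart them mid-reconfigure. A minimal sketch of that pattern (the actual IDs come from the `docker ps` call above):

    # stop every kube-system container, then the kubelet that supervises them
    ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')
    [ -n "$ids" ] && docker stop $ids
    sudo systemctl stop kubelet
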
	I0919 12:22:53.243931    4788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:22:53.246718    4788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 12:22:53.246725    4788 kubeadm.go:157] found existing configuration files:
	
	I0919 12:22:53.246751    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0919 12:22:53.249247    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 12:22:53.249274    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:22:53.252257    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0919 12:22:53.254871    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 12:22:53.254894    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:22:53.257392    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0919 12:22:53.260346    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 12:22:53.260371    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:22:53.263202    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0919 12:22:53.265629    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 12:22:53.265653    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
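
The same two-step pattern repeats above for each kubeconfig: grep for the expected control-plane endpoint, and if the file is missing or points elsewhere (grep exits non-zero), remove it so the upcoming `kubeadm init phase kubeconfig` can rewrite it. As a loop (endpoint and file names from the log):

    endpoint=https://control-plane.minikube.internal:50538
    for f in admin kubelet controller-manager scheduler; do
      # grep exits 1 on no match and 2 if the file is missing; either way, remove it
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
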
	I0919 12:22:53.268703    4788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:22:53.271887    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:53.296722    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:53.870759    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:54.000957    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 12:22:54.029186    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
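
Rather than a full `kubeadm init`, the restart path replays only the phases it needs, in order: certs, kubeconfigs, kubelet-start, the control-plane static pods, then local etcd. An equivalent sequence (binary path and config file from the log; the loop form is illustrative):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
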
	I0919 12:22:54.053773    4788 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:22:54.053857    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:22:54.555276    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:22:55.055876    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:22:55.060046    4788 api_server.go:72] duration metric: took 1.006303125s to wait for apiserver process to appear ...
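
The ~1s wait above is the process-level gate: `pgrep -xnf` matches the newest process whose full command line matches the pattern, retried every 500ms until the apiserver appears. A minimal equivalent of that loop:

    # poll until a kube-apiserver process with "minikube" in its args exists
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
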
	I0919 12:22:55.060055    4788 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:22:55.060065    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:00.062047    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:00.062104    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:05.062256    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:05.062326    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:10.062741    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:10.062786    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:15.063354    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:15.063413    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:20.064176    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:20.064204    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:25.065202    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:25.065222    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:30.066477    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:30.066521    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:35.068184    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:35.068280    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:40.070710    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:40.070745    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:45.072837    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:45.072863    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:50.073926    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:23:50.074022    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:23:55.076099    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
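
Every healthz probe in this run times out after ~5s and never succeeds: the apiserver process exists but never becomes healthy, consistent with this test ultimately failing. The probe itself is an HTTPS GET that expects the literal body "ok"; roughly (address from the log; `-k` skips TLS verification and is illustrative only, the real client verifies against minikube's CA):

    # the apiserver reports healthy by answering /healthz with "ok"
    if [ "$(curl -ks --max-time 5 https://10.0.2.15:8443/healthz)" = "ok" ]; then
      echo "apiserver healthy"
    else
      echo "apiserver not responding"
    fi
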
	I0919 12:23:55.076275    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:23:55.087367    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:23:55.087465    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:23:55.098077    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:23:55.098169    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:23:55.108471    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:23:55.108546    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:23:55.119975    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:23:55.120073    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:23:55.136110    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:23:55.136190    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:23:55.146976    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:23:55.147065    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:23:55.157181    4788 logs.go:276] 0 containers: []
	W0919 12:23:55.157191    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:23:55.157255    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:23:55.167739    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:23:55.167757    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:23:55.167763    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:23:55.208760    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:23:55.208772    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:23:55.220478    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:23:55.220491    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:23:55.232992    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:23:55.233002    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:23:55.245223    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:23:55.245236    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:23:55.271758    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:23:55.271770    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:23:55.356684    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:23:55.356696    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:23:55.371979    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:23:55.371991    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:23:55.390220    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:23:55.390232    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:23:55.402237    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:23:55.402249    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:23:55.414789    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:23:55.414799    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:23:55.419375    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:23:55.419382    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:23:55.433101    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:23:55.433113    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:23:55.444716    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:23:55.444726    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:23:55.458156    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:23:55.458166    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:23:55.471926    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:23:55.471936    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:23:55.488044    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:23:55.488055    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
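
Each diagnostic pass like the one above follows the same two-step pattern per component: resolve container IDs by the `k8s_<component>` name prefix, then tail the last 400 lines of each (plus journalctl for the kubelet and Docker, dmesg, and `kubectl describe nodes`). A sketch of one pass over the containerized components (component list from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter=name="k8s_$c" --format='{{.ID}}'); do
        echo "== $c ($id) =="
        docker logs --tail 400 "$id"
      done
    done
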
	I0919 12:23:58.028106    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:03.027300    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:03.027880    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:03.065905    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:03.066053    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:03.086711    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:03.086810    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:03.099430    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:03.099527    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:03.110868    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:03.110964    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:03.121726    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:03.121808    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:03.132372    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:03.132455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:03.143743    4788 logs.go:276] 0 containers: []
	W0919 12:24:03.143754    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:03.143822    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:03.154720    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:03.154736    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:03.154742    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:03.159098    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:03.159104    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:03.173365    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:03.173378    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:03.192914    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:03.192926    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:03.218560    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:03.218570    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:03.256811    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:03.256825    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:03.270885    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:03.270910    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:03.293158    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:03.293167    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:03.304034    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:03.304045    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:03.315487    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:03.315499    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:03.326930    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:03.326940    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:03.364428    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:03.364440    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:03.384952    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:03.384961    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:03.397032    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:03.397041    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:03.414574    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:03.414586    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:03.428579    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:03.428589    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:03.439824    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:03.439835    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:05.978408    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:10.976754    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:10.976928    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:10.994294    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:10.994399    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:11.008107    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:11.008199    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:11.020874    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:11.020956    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:11.033529    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:11.033616    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:11.044451    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:11.044532    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:11.055633    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:11.055725    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:11.065894    4788 logs.go:276] 0 containers: []
	W0919 12:24:11.065907    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:11.065986    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:11.080410    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:11.080429    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:11.080435    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:11.085130    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:11.085137    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:11.104526    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:11.104536    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:11.115885    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:11.115897    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:11.140528    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:11.140536    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:11.178498    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:11.178508    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:11.227043    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:11.227057    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:11.239824    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:11.239835    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:11.256728    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:11.256741    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:11.268970    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:11.268981    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:11.285097    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:11.285109    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:11.303995    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:11.304004    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:11.316140    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:11.316151    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:11.354543    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:11.354555    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:11.368906    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:11.368917    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:11.383385    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:11.383399    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:11.394730    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:11.394739    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:13.910392    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:18.910970    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:18.911235    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:18.935131    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:18.935259    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:18.951297    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:18.951394    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:18.963471    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:18.963562    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:18.975137    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:18.975211    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:18.985972    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:18.986059    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:18.996523    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:18.996613    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:19.007486    4788 logs.go:276] 0 containers: []
	W0919 12:24:19.007500    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:19.007575    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:19.018191    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:19.018209    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:19.018215    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:19.022507    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:19.022514    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:19.036741    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:19.036753    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:19.052041    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:19.052051    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:19.078341    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:19.078356    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:19.094940    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:19.094952    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:19.131880    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:19.131891    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:19.172017    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:19.172028    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:19.184393    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:19.184405    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:19.201953    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:19.201965    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:19.215895    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:19.215907    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:19.230008    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:19.230019    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:19.240829    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:19.240840    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:19.253807    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:19.253817    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:19.291122    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:19.291136    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:19.305304    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:19.305315    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:19.316239    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:19.316250    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:21.830451    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:26.831476    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:26.831729    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:26.857524    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:26.857683    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:26.874936    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:26.875042    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:26.888122    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:26.888223    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:26.899849    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:26.899933    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:26.909834    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:26.909918    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:26.924732    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:26.924820    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:26.934961    4788 logs.go:276] 0 containers: []
	W0919 12:24:26.934979    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:26.935057    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:26.945657    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:26.945673    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:26.945678    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:26.985230    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:26.985242    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:27.028375    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:27.028386    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:27.042757    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:27.042770    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:27.057135    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:27.057145    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:27.072076    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:27.072088    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:27.085703    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:27.085716    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:27.090358    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:27.090368    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:27.101736    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:27.101748    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:27.113772    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:27.113785    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:27.131857    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:27.131867    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:27.144219    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:27.144231    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:27.183882    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:27.183895    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:27.195721    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:27.195733    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:27.207844    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:27.207855    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:27.221883    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:27.221897    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:27.233991    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:27.234003    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:29.760325    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:34.761843    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:34.762011    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:34.773181    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:34.773260    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:34.784044    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:34.784135    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:34.794649    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:34.794730    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:34.806609    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:34.806697    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:34.816931    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:34.817016    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:34.827688    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:34.827775    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:34.842464    4788 logs.go:276] 0 containers: []
	W0919 12:24:34.842477    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:34.842557    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:34.852973    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:34.853014    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:34.853020    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:34.857171    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:34.857177    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:34.871084    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:34.871100    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:34.885408    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:34.885422    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:34.897083    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:34.897097    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:34.908032    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:34.908043    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:34.926146    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:34.926163    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:34.937495    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:34.937505    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:34.948985    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:34.949000    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:34.966108    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:34.966118    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:34.980946    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:34.980956    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:34.992368    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:34.992384    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:35.003930    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:35.003940    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:35.043144    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:35.043153    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:35.086919    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:35.086930    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:35.125139    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:35.125150    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:35.141737    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:35.141747    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:37.667510    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:42.669648    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:42.669988    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:42.696749    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:42.696900    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:42.713849    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:42.713953    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:42.727812    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:42.727909    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:42.739433    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:42.739522    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:42.753983    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:42.754064    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:42.764220    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:42.764306    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:42.774575    4788 logs.go:276] 0 containers: []
	W0919 12:24:42.774588    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:42.774656    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:42.785098    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:42.785115    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:42.785120    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:42.800285    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:42.800295    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:42.812351    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:42.812361    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:42.850460    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:42.850473    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:42.862557    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:42.862570    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:42.900136    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:42.900146    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:42.936949    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:42.936963    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:42.954280    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:42.954291    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:42.968184    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:42.968194    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:42.980236    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:42.980246    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:42.984161    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:42.984170    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:42.998167    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:42.998177    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:43.010061    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:43.010071    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:43.032078    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:43.032087    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:43.043361    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:43.043372    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:43.067084    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:43.067093    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:43.080904    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:43.080915    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:45.593993    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:50.595892    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:24:50.596182    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:50.618137    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:50.618261    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:50.639141    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:50.639238    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:50.651311    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:50.651389    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:50.661398    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:50.661485    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:50.672066    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:50.672159    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:50.682655    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:50.682729    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:50.694494    4788 logs.go:276] 0 containers: []
	W0919 12:24:50.694514    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:50.694590    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:50.705609    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:24:50.705628    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:50.705636    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:50.723391    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:50.723400    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:24:50.734455    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:50.734467    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:50.749906    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:50.749919    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:50.763502    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:50.763511    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:50.800604    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:50.800615    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:50.804698    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:50.804704    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:50.816080    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:50.816090    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:50.833297    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:50.833308    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:50.844744    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:50.844754    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:50.883212    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:50.883220    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:50.894588    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:50.894600    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:50.920471    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:50.920478    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:50.932248    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:50.932263    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:50.946257    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:50.946267    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:50.960124    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:50.960134    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:50.972077    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:50.972086    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:53.508259    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:24:58.510254    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
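The five-second gap between each "Checking apiserver healthz" line and its "stopped ... context deadline exceeded" line matches a hard HTTP client timeout on the probe. A minimal Go sketch of that probe pattern follows; it is not minikube's actual implementation, and both the 5s timeout and the skipped TLS verification are assumptions inferred from the log:

// Sketch only: probe an apiserver /healthz endpoint with a hard client
// timeout, so a hung apiserver surfaces as "context deadline exceeded".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// Assumed from the ~5s spacing between "Checking" and "stopped" lines.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The guest endpoint presents a self-signed cert, so a probe like
			// this would skip verification; an assumption, not minikube's code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}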
	I0919 12:24:58.510455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:24:58.522071    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:24:58.522165    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:24:58.533374    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:24:58.533475    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:24:58.543944    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:24:58.544030    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:24:58.554659    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:24:58.554748    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:24:58.566023    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:24:58.566107    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:24:58.576771    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:24:58.576846    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:24:58.587426    4788 logs.go:276] 0 containers: []
	W0919 12:24:58.587438    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:24:58.587516    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:24:58.600042    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
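Each failed probe is followed by a discovery pass: one `docker ps -a --filter name=k8s_<component> --format {{.ID}}` per control-plane component. A hypothetical local sketch of that step (minikube runs these over SSH inside the guest via ssh_runner; this version runs docker directly, which is an assumption for brevity):

// Sketch: list container IDs for each k8s_<component> name filter.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Two IDs per component in the log above indicate an exited
		// instance plus its restarted replacement.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}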
	I0919 12:24:58.600064    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:24:58.600069    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:24:58.611567    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:24:58.611578    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:24:58.630759    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:24:58.630775    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:24:58.641929    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:24:58.641940    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:24:58.683426    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:24:58.683436    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:24:58.698440    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:24:58.698452    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:24:58.717626    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:24:58.717635    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:24:58.729750    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:24:58.729764    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:24:58.741386    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:24:58.741396    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:24:58.753923    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:24:58.753939    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:24:58.804458    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:24:58.804468    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:24:58.820805    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:24:58.820819    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:24:58.835200    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:24:58.835214    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:24:58.859645    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:24:58.859653    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:24:58.898097    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:24:58.898105    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:24:58.902539    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:24:58.902545    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:24:58.917245    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:24:58.917257    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
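The gather pass that follows discovery tails the last 400 lines of every container plus the host-level sources (kubelet and docker/cri-docker units, the kernel ring buffer, container status with a crictl-or-docker fallback, and `describe nodes` via the pinned kubectl). The command strings below are assembled by hand to mirror the Run: lines above, not taken from minikube's source:

// Sketch: the shell commands issued during one gathering pass.
package main

import "fmt"

func gatherCommands(containerIDs []string) []string {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",                                        // kubelet unit
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",  // kernel warnings+
		"sudo journalctl -u docker -u cri-docker -n 400",                           // runtime units
		// Prefer crictl when present, otherwise fall back to docker:
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for _, id := range containerIDs {
		cmds = append(cmds, "docker logs --tail 400 "+id) // last 400 lines per container
	}
	return cmds
}

func main() {
	for _, c := range gatherCommands([]string{"ca8b4def2e91", "774ea5b64f89"}) {
		fmt.Println(c)
	}
}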
	I0919 12:25:01.429302    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:06.431385    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:06.431592    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:06.445129    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:06.445228    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:06.456219    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:06.456310    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:06.466596    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:06.466685    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:06.477264    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:06.477351    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:06.487903    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:06.487985    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:06.498735    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:06.498812    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:06.509136    4788 logs.go:276] 0 containers: []
	W0919 12:25:06.509146    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:06.509220    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:06.519210    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:06.519227    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:06.519233    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:06.531174    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:06.531184    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:06.542177    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:06.542189    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:06.546656    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:06.546667    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:06.560450    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:06.560460    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:06.598339    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:06.598350    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:06.613285    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:06.613294    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:06.625189    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:06.625200    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:06.642470    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:06.642480    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:06.681595    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:06.681603    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:06.715876    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:06.715887    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:06.732275    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:06.732284    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:06.743714    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:06.743723    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:06.768287    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:06.768295    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:06.779928    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:06.779939    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:06.795035    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:06.795046    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:06.810249    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:06.810259    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:09.323985    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:14.326209    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:14.326598    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:14.354862    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:14.355017    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:14.373679    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:14.373798    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:14.393367    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:14.393451    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:14.405049    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:14.405137    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:14.416422    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:14.416509    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:14.427218    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:14.427302    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:14.439933    4788 logs.go:276] 0 containers: []
	W0919 12:25:14.439950    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:14.440022    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:14.450966    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:14.450988    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:14.450994    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:14.489724    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:14.489740    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:14.503867    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:14.503881    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:14.522755    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:14.522769    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:14.534365    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:14.534375    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:14.557442    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:14.557450    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:14.594542    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:14.594551    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:14.628843    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:14.628858    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:14.641445    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:14.641457    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:14.656918    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:14.656934    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:14.671641    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:14.671653    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:14.683945    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:14.683957    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:14.702623    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:14.702634    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:14.714336    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:14.714347    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:14.728131    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:14.728142    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:14.732636    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:14.732643    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:14.744194    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:14.744206    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:17.256062    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:22.258310    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:22.258537    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:22.276549    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:22.276663    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:22.289761    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:22.289854    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:22.301201    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:22.301293    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:22.311874    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:22.311965    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:22.322312    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:22.322396    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:22.333620    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:22.333698    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:22.345268    4788 logs.go:276] 0 containers: []
	W0919 12:25:22.345278    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:22.345344    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:22.355676    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:22.355695    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:22.355701    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:22.370522    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:22.370532    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:22.384728    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:22.384738    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:22.421782    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:22.421795    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:22.426343    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:22.426351    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:22.443102    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:22.443118    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:22.454544    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:22.454561    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:22.466723    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:22.466734    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:22.483537    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:22.483547    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:22.522280    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:22.522291    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:22.539969    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:22.539980    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:22.563634    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:22.563643    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:22.581039    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:22.581051    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:22.616736    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:22.616747    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:22.630420    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:22.630428    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:22.645007    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:22.645017    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:22.660395    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:22.660406    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:25.173854    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:30.176107    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:30.176562    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:30.207946    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:30.208101    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:30.226238    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:30.226356    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:30.242255    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:30.242349    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:30.254137    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:30.254225    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:30.264625    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:30.264708    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:30.275274    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:30.275360    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:30.286236    4788 logs.go:276] 0 containers: []
	W0919 12:25:30.286250    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:30.286327    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:30.296979    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:30.296998    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:30.297003    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:30.311527    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:30.311538    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:30.329112    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:30.329124    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:30.340638    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:30.340652    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:30.364944    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:30.364952    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:30.405230    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:30.405242    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:30.419939    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:30.419953    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:30.433186    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:30.433198    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:30.455831    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:30.455846    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:30.460220    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:30.460226    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:30.471064    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:30.471075    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:30.482472    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:30.482485    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:30.496429    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:30.496444    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:30.536729    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:30.536748    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:30.575659    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:30.575676    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:30.587843    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:30.587854    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:30.599208    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:30.599218    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:33.115522    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:38.118114    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:38.118416    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:38.150215    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:38.150329    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:38.165237    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:38.165336    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:38.177346    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:38.177429    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:38.188168    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:38.188252    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:38.198804    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:38.198892    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:38.209356    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:38.209446    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:38.220815    4788 logs.go:276] 0 containers: []
	W0919 12:25:38.220826    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:38.220902    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:38.235431    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:38.235453    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:38.235459    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:38.248325    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:38.248338    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:38.283120    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:38.283135    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:38.307247    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:38.307255    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:38.318771    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:38.318781    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:38.332829    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:38.332840    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:38.343863    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:38.343873    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:38.358621    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:38.358636    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:38.371059    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:38.371072    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:38.397968    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:38.397978    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:38.409828    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:38.409838    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:38.414127    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:38.414133    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:38.452351    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:38.452370    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:38.466061    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:38.466071    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:38.480152    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:38.480161    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:38.494306    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:38.494322    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:38.510917    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:38.510926    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:41.052523    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:46.054683    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:46.054952    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:46.075527    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:46.075642    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:46.096497    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:46.096582    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:46.107622    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:46.107703    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:46.117930    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:46.118012    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:46.128228    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:46.128313    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:46.138629    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:46.138702    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:46.148599    4788 logs.go:276] 0 containers: []
	W0919 12:25:46.148613    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:46.148688    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:46.158963    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:46.158979    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:46.158985    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:46.174253    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:46.174264    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:46.188689    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:46.188700    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:46.203799    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:46.203810    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:46.214931    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:46.214942    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:46.226218    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:46.226230    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:46.237834    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:46.237847    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:46.275753    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:46.275766    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:46.313685    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:46.313695    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:46.325963    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:46.325974    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:46.344422    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:46.344440    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:46.359283    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:46.359295    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:46.363670    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:46.363680    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:46.401244    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:46.401257    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:46.425216    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:46.425232    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:46.444017    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:46.444030    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:46.460201    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:46.460212    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:48.974895    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:25:53.977474    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:25:53.977686    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:25:53.995043    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:25:53.995150    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:25:54.008260    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:25:54.008351    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:25:54.019673    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:25:54.019757    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:25:54.030486    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:25:54.030577    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:25:54.041644    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:25:54.041732    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:25:54.052323    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:25:54.052413    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:25:54.063510    4788 logs.go:276] 0 containers: []
	W0919 12:25:54.063522    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:25:54.063589    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:25:54.073919    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:25:54.073934    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:25:54.073939    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:25:54.108073    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:25:54.108088    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:25:54.145280    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:25:54.145290    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:25:54.161117    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:25:54.161135    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:25:54.186864    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:25:54.186877    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:25:54.199443    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:25:54.199459    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:25:54.217946    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:25:54.217955    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:25:54.230936    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:25:54.230947    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:25:54.272131    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:25:54.272146    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:25:54.291151    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:25:54.291165    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:25:54.308151    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:25:54.308170    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:25:54.313174    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:25:54.313183    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:25:54.328616    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:25:54.328632    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:25:54.340997    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:25:54.341011    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:25:54.353432    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:25:54.353444    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:25:54.369213    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:25:54.369228    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:25:54.382732    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:25:54.382744    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:25:56.896931    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:01.899169    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:01.899371    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:01.912399    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:01.912495    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:01.923975    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:01.924054    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:01.934933    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:01.935022    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:01.945389    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:01.945467    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:01.955925    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:01.955996    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:01.966530    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:01.966595    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:01.976979    4788 logs.go:276] 0 containers: []
	W0919 12:26:01.976993    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:01.977069    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:01.988095    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:01.988112    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:01.988118    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:02.002107    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:02.002119    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:02.013292    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:02.013303    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:02.029027    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:02.029041    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:02.041699    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:02.041714    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:02.054614    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:02.054622    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:02.067465    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:02.067481    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:02.109000    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:02.109011    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:02.113777    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:02.113787    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:02.132756    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:02.132766    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:02.150965    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:02.150982    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:02.169846    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:02.169864    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:02.210639    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:02.210651    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:02.230849    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:02.230861    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:02.248247    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:02.248260    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:02.285673    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:02.285683    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:02.298770    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:02.298785    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:04.824314    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:09.826400    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:09.826541    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:09.838099    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:09.838187    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:09.848591    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:09.848677    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:09.859424    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:09.859503    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:09.869979    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:09.870065    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:09.880557    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:09.880644    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:09.891486    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:09.891552    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:09.902325    4788 logs.go:276] 0 containers: []
	W0919 12:26:09.902332    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:09.902371    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:09.913571    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:09.913589    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:09.913596    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:09.951473    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:09.951482    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:09.966584    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:09.966604    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:09.979052    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:09.979064    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:09.998062    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:09.998078    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:10.022971    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:10.022986    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:10.036828    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:10.036840    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:10.077447    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:10.077467    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:10.097188    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:10.097201    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:10.138374    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:10.138388    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:10.154084    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:10.154100    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:10.167341    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:10.167355    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:10.185869    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:10.185886    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:10.200777    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:10.200794    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:10.217078    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:10.217093    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:10.229432    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:10.229444    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:10.242491    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:10.242503    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:12.749306    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:17.750893    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:17.751226    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:17.785415    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:17.785526    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:17.813055    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:17.813148    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:17.835161    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:17.835363    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:17.852517    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:17.852704    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:17.864531    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:17.864618    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:17.875748    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:17.875836    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:17.886705    4788 logs.go:276] 0 containers: []
	W0919 12:26:17.886717    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:17.886794    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:17.898106    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:17.898123    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:17.898129    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:17.920746    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:17.920755    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:17.959528    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:17.959542    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:17.975408    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:17.975417    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:18.015603    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:18.015614    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:18.031608    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:18.031617    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:18.046585    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:18.046601    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:18.051189    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:18.051203    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:18.064216    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:18.064229    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:18.077194    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:18.077208    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:18.089572    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:18.089583    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:18.108219    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:18.108228    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:18.128838    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:18.128847    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:18.149846    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:18.149859    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:18.165909    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:18.165921    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:18.201021    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:18.201034    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:18.212291    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:18.212304    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
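
The pattern above repeats for the rest of this section: minikube probes the apiserver's /healthz endpoint, the request times out after roughly five seconds, and another round of component-log gathering begins. A minimal Go sketch of that probe, under stated assumptions: the endpoint and timeout come from the log, the retry budget is invented for illustration, and TLS verification is skipped here whereas the real api_server.go client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a single GET against the apiserver health endpoint.
// A dead apiserver surfaces as the "context deadline exceeded" error the
// log records at api_server.go:269.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; real code pins the cluster CA
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for attempt := 0; attempt < 10; attempt++ { // retry budget is illustrative
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err) // the real loop gathers component logs here
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
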
	I0919 12:26:20.726147    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:25.728432    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:25.728512    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:25.740807    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:25.740895    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:25.752386    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:25.752513    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:25.764204    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:25.764290    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:25.777007    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:25.777094    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:25.788470    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:25.788566    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:25.804435    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:25.804521    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:25.814871    4788 logs.go:276] 0 containers: []
	W0919 12:26:25.814882    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:25.814962    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:25.826347    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:25.826366    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:25.826374    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:25.840137    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:25.840150    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:25.858658    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:25.858670    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:25.900407    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:25.900429    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:25.905486    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:25.905499    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:25.917863    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:25.917877    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:25.932557    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:25.932566    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:25.947442    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:25.947451    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:25.963710    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:25.963726    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:25.980131    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:25.980143    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:26.006551    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:26.006569    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:26.019375    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:26.019390    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:26.040029    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:26.040039    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:26.054942    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:26.054959    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:26.067777    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:26.067790    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:26.103740    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:26.103754    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:26.118840    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:26.118850    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:28.659095    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:33.659553    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:33.659631    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:33.671221    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:33.671315    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:33.682956    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:33.683048    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:33.694396    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:33.694481    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:33.706691    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:33.706774    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:33.719138    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:33.719216    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:33.730713    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:33.730805    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:33.741968    4788 logs.go:276] 0 containers: []
	W0919 12:26:33.741979    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:33.742057    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:33.753298    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:33.753317    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:33.753324    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:33.768670    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:33.768683    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:33.782114    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:33.782127    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:33.798234    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:33.798245    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:33.810526    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:33.810538    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:33.825396    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:33.825412    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:33.837947    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:33.837958    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:33.857655    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:33.857666    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:33.871340    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:33.871352    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:33.913811    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:33.913833    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:33.919381    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:33.919394    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:33.960094    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:33.960106    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:33.974931    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:33.974946    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:33.990514    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:33.990529    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:34.014725    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:34.014733    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:34.050433    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:34.050448    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:34.062135    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:34.062150    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
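
Each failed probe triggers the same gathering pass: enumerate per-component containers with docker ps name filters (including exited ones, hence the two IDs for most components after the restart), then pull the last 400 log lines from each. A sketch of that pass with the exec plumbing simplified; the real code runs these commands inside the VM over SSH via ssh_runner.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c) // cf. the kindnet warning above
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
		}
	}
}
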
	I0919 12:26:36.576279    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:41.576870    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:41.576985    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:41.588205    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:41.588291    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:41.600026    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:41.600071    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:41.613000    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:41.613079    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:41.626157    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:41.626244    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:41.637468    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:41.637556    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:41.655449    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:41.655540    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:41.668610    4788 logs.go:276] 0 containers: []
	W0919 12:26:41.668621    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:41.668699    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:41.679652    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:41.679671    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:41.679677    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:41.698522    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:41.698534    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:41.711280    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:41.711293    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:41.724413    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:41.724427    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:41.728603    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:41.728614    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:41.740496    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:41.740512    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:41.756716    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:41.756730    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:41.772665    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:41.772686    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:41.785489    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:41.785500    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:41.825604    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:41.825627    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:41.865740    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:41.865762    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:41.885641    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:41.885657    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:41.922416    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:41.922427    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:41.939593    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:41.939608    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:41.965796    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:41.965817    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:41.987292    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:41.987306    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:42.000805    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:42.000820    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:44.514361    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:49.514466    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:49.514566    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:26:49.525965    4788 logs.go:276] 2 containers: [ca8b4def2e91 6e24dc0306c2]
	I0919 12:26:49.526050    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:26:49.537315    4788 logs.go:276] 2 containers: [774ea5b64f89 219994403f67]
	I0919 12:26:49.537407    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:26:49.548371    4788 logs.go:276] 1 containers: [bd41a847495f]
	I0919 12:26:49.548455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:26:49.559693    4788 logs.go:276] 2 containers: [d59d211d9238 a04ca8cc8c56]
	I0919 12:26:49.559787    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:26:49.571410    4788 logs.go:276] 1 containers: [ab665f2acfb4]
	I0919 12:26:49.571488    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:26:49.582678    4788 logs.go:276] 2 containers: [2aff8a274695 9ceebd9f5b94]
	I0919 12:26:49.582764    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:26:49.597921    4788 logs.go:276] 0 containers: []
	W0919 12:26:49.597933    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:26:49.598008    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:26:49.609990    4788 logs.go:276] 2 containers: [a354c60dcbaa a54fd3866b47]
	I0919 12:26:49.610011    4788 logs.go:123] Gathering logs for kube-proxy [ab665f2acfb4] ...
	I0919 12:26:49.610016    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab665f2acfb4"
	I0919 12:26:49.626559    4788 logs.go:123] Gathering logs for storage-provisioner [a354c60dcbaa] ...
	I0919 12:26:49.626572    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a354c60dcbaa"
	I0919 12:26:49.639153    4788 logs.go:123] Gathering logs for kube-apiserver [ca8b4def2e91] ...
	I0919 12:26:49.639166    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8b4def2e91"
	I0919 12:26:49.657067    4788 logs.go:123] Gathering logs for kube-apiserver [6e24dc0306c2] ...
	I0919 12:26:49.657083    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e24dc0306c2"
	I0919 12:26:49.702565    4788 logs.go:123] Gathering logs for etcd [219994403f67] ...
	I0919 12:26:49.702579    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 219994403f67"
	I0919 12:26:49.719375    4788 logs.go:123] Gathering logs for kube-scheduler [d59d211d9238] ...
	I0919 12:26:49.719388    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59d211d9238"
	I0919 12:26:49.732469    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:26:49.732481    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:26:49.745449    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:26:49.745462    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:26:49.785155    4788 logs.go:123] Gathering logs for etcd [774ea5b64f89] ...
	I0919 12:26:49.785171    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 774ea5b64f89"
	I0919 12:26:49.799235    4788 logs.go:123] Gathering logs for kube-controller-manager [2aff8a274695] ...
	I0919 12:26:49.799246    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2aff8a274695"
	I0919 12:26:49.818635    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:26:49.818646    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:26:49.842415    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:26:49.842425    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:26:49.847030    4788 logs.go:123] Gathering logs for coredns [bd41a847495f] ...
	I0919 12:26:49.847037    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd41a847495f"
	I0919 12:26:49.864243    4788 logs.go:123] Gathering logs for kube-scheduler [a04ca8cc8c56] ...
	I0919 12:26:49.864257    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a04ca8cc8c56"
	I0919 12:26:49.883008    4788 logs.go:123] Gathering logs for kube-controller-manager [9ceebd9f5b94] ...
	I0919 12:26:49.883023    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ceebd9f5b94"
	I0919 12:26:49.896667    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:26:49.896677    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:26:49.930419    4788 logs.go:123] Gathering logs for storage-provisioner [a54fd3866b47] ...
	I0919 12:26:49.930430    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54fd3866b47"
	I0919 12:26:52.447923    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:26:57.450084    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:26:57.450131    4788 kubeadm.go:597] duration metric: took 4m4.2656165s to restartPrimaryControlPlane
	W0919 12:26:57.450160    4788 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0919 12:26:57.450175    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0919 12:26:58.445905    4788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 12:26:58.451425    4788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 12:26:58.454186    4788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 12:26:58.457005    4788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 12:26:58.457011    4788 kubeadm.go:157] found existing configuration files:
	
	I0919 12:26:58.457039    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0919 12:26:58.459508    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 12:26:58.459538    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 12:26:58.461962    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0919 12:26:58.464698    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 12:26:58.464725    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 12:26:58.467223    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0919 12:26:58.469978    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 12:26:58.470003    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 12:26:58.473190    4788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0919 12:26:58.475943    4788 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 12:26:58.475971    4788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
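
The cleanup sequence just above applies one rule per file: grep each kubeadm-managed kubeconfig for the expected control-plane endpoint, and remove the file when the endpoint (or the file itself) is missing so kubeadm init can regenerate it. A compact sketch of that rule; the paths and endpoint string are the ones in the log, and error handling is trimmed.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50538"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern or the file is missing --
		// exactly the "Process exited with status 2" cases logged above
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run() // best-effort, like rm -f
		}
	}
}
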
	I0919 12:26:58.478457    4788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 12:26:58.495279    4788 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0919 12:26:58.495340    4788 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 12:26:58.543136    4788 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 12:26:58.543188    4788 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 12:26:58.543231    4788 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 12:26:58.592453    4788 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 12:26:58.600631    4788 out.go:235]   - Generating certificates and keys ...
	I0919 12:26:58.600664    4788 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 12:26:58.600704    4788 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 12:26:58.600754    4788 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 12:26:58.600793    4788 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0919 12:26:58.600828    4788 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 12:26:58.600860    4788 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0919 12:26:58.600894    4788 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0919 12:26:58.600930    4788 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0919 12:26:58.600967    4788 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 12:26:58.601003    4788 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 12:26:58.601023    4788 kubeadm.go:310] [certs] Using the existing "sa" key
	I0919 12:26:58.601060    4788 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 12:26:58.673442    4788 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 12:26:58.854254    4788 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 12:26:58.919295    4788 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 12:26:59.136297    4788 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 12:26:59.166305    4788 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 12:26:59.166827    4788 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 12:26:59.166950    4788 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 12:26:59.243951    4788 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 12:26:59.248165    4788 out.go:235]   - Booting up control plane ...
	I0919 12:26:59.248229    4788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 12:26:59.248286    4788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 12:26:59.248329    4788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 12:26:59.248409    4788 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 12:26:59.248488    4788 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 12:27:04.249880    4788 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002629 seconds
	I0919 12:27:04.249960    4788 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 12:27:04.254044    4788 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 12:27:04.767823    4788 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 12:27:04.768088    4788 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-269000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 12:27:05.273090    4788 kubeadm.go:310] [bootstrap-token] Using token: gqikgj.g8ry9h3d1m1lhgda
	I0919 12:27:05.276261    4788 out.go:235]   - Configuring RBAC rules ...
	I0919 12:27:05.276319    4788 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 12:27:05.276359    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 12:27:05.282493    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 12:27:05.283664    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 12:27:05.284875    4788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 12:27:05.285982    4788 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 12:27:05.290095    4788 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 12:27:05.482907    4788 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 12:27:05.676810    4788 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 12:27:05.677409    4788 kubeadm.go:310] 
	I0919 12:27:05.677442    4788 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 12:27:05.677445    4788 kubeadm.go:310] 
	I0919 12:27:05.677480    4788 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 12:27:05.677483    4788 kubeadm.go:310] 
	I0919 12:27:05.677496    4788 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 12:27:05.677531    4788 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 12:27:05.677559    4788 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 12:27:05.677564    4788 kubeadm.go:310] 
	I0919 12:27:05.677592    4788 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 12:27:05.677596    4788 kubeadm.go:310] 
	I0919 12:27:05.677622    4788 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 12:27:05.677626    4788 kubeadm.go:310] 
	I0919 12:27:05.677650    4788 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 12:27:05.677686    4788 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 12:27:05.677724    4788 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 12:27:05.677727    4788 kubeadm.go:310] 
	I0919 12:27:05.677769    4788 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 12:27:05.677809    4788 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 12:27:05.677812    4788 kubeadm.go:310] 
	I0919 12:27:05.677852    4788 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gqikgj.g8ry9h3d1m1lhgda \
	I0919 12:27:05.677902    4788 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 \
	I0919 12:27:05.677913    4788 kubeadm.go:310] 	--control-plane 
	I0919 12:27:05.677919    4788 kubeadm.go:310] 
	I0919 12:27:05.677963    4788 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 12:27:05.677969    4788 kubeadm.go:310] 
	I0919 12:27:05.678020    4788 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gqikgj.g8ry9h3d1m1lhgda \
	I0919 12:27:05.678078    4788 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e0c2857de0258e65a9bba263f6157106d84e898a6b55abbe378b8f48b6c815 
	I0919 12:27:05.678254    4788 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
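
One detail of the join commands printed above: the --discovery-token-ca-cert-hash value is not secret material. kubeadm derives it by hashing the cluster CA certificate's Subject Public Key Info, which lets joining nodes pin the CA they discover. A sketch of that derivation, assuming the conventional kubeadm CA path:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm location
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the CA's Subject Public Key Info
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
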
	I0919 12:27:05.678311    4788 cni.go:84] Creating CNI manager for ""
	I0919 12:27:05.678320    4788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:27:05.684593    4788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 12:27:05.687931    4788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 12:27:05.690791    4788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 12:27:05.695933    4788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 12:27:05.696005    4788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-269000 minikube.k8s.io/updated_at=2024_09_19T12_27_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=stopped-upgrade-269000 minikube.k8s.io/primary=true
	I0919 12:27:05.696007    4788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 12:27:05.732025    4788 kubeadm.go:1113] duration metric: took 36.062583ms to wait for elevateKubeSystemPrivileges
	I0919 12:27:05.740905    4788 ops.go:34] apiserver oom_adj: -16
	I0919 12:27:05.740921    4788 kubeadm.go:394] duration metric: took 4m12.572005166s to StartCluster
	I0919 12:27:05.740934    4788 settings.go:142] acquiring lock: {Name:mk40c96dc3647741b89517369d27068bccfc0e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:27:05.741027    4788 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:27:05.741444    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/kubeconfig: {Name:mk8a8f1f5779f30829ec51973ad05815f1640da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:27:05.742012    4788 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:27:05.742037    4788 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 12:27:05.742073    4788 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-269000"
	I0919 12:27:05.742091    4788 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-269000"
	I0919 12:27:05.742093    4788 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-269000"
	W0919 12:27:05.742096    4788 addons.go:243] addon storage-provisioner should already be in state true
	I0919 12:27:05.742097    4788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-269000"
	I0919 12:27:05.742100    4788 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:27:05.742109    4788 host.go:66] Checking if "stopped-upgrade-269000" exists ...
	I0919 12:27:05.743012    4788 kapi.go:59] client config for stopped-upgrade-269000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/stopped-upgrade-269000/client.key", CAFile:"/Users/jenkins/minikube-integration/19664-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104009800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 12:27:05.743136    4788 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-269000"
	W0919 12:27:05.743141    4788 addons.go:243] addon default-storageclass should already be in state true
	I0919 12:27:05.743147    4788 host.go:66] Checking if "stopped-upgrade-269000" exists ...
	I0919 12:27:05.746615    4788 out.go:177] * Verifying Kubernetes components...
	I0919 12:27:05.746937    4788 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 12:27:05.749715    4788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 12:27:05.749723    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:27:05.753557    4788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 12:27:05.757624    4788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 12:27:05.761609    4788 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 12:27:05.761615    4788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 12:27:05.761622    4788 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/stopped-upgrade-269000/id_rsa Username:docker}
	I0919 12:27:05.851506    4788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 12:27:05.856410    4788 api_server.go:52] waiting for apiserver process to appear ...
	I0919 12:27:05.856460    4788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 12:27:05.860089    4788 api_server.go:72] duration metric: took 118.068709ms to wait for apiserver process to appear ...
	I0919 12:27:05.860097    4788 api_server.go:88] waiting for apiserver healthz status ...
	I0919 12:27:05.860104    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:05.865749    4788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 12:27:05.889700    4788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 12:27:06.216690    4788 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 12:27:06.216703    4788 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 12:27:10.861807    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:10.861847    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:15.861924    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:15.861942    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:20.862433    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:20.862457    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:25.862687    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:25.862742    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:30.863181    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:30.863210    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:35.863769    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:35.863811    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0919 12:27:36.218201    4788 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0919 12:27:36.227315    4788 out.go:177] * Enabled addons: storage-provisioner
	I0919 12:27:36.234522    4788 addons.go:510] duration metric: took 30.493432917s for enable addons: enabled=[storage-provisioner]
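
The 'default-storageclass' error a few lines up is the host-side addon callback timing out on its first API call: making "standard" the default requires listing StorageClasses and checking the is-default-class annotation, and that List never reaches the unhealthy apiserver. A minimal client-go sketch of the failing call; the kubeconfig path is the one updated at settings.go:150 above, while the real callback reuses the profile client config shown earlier.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19664-1099/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// with the apiserver down this is the "Error listing StorageClasses" path above
		fmt.Println("list failed:", err)
		return
	}
	for _, sc := range scs.Items {
		isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
		fmt.Printf("%s default=%q\n", sc.Name, isDefault)
	}
}
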
	I0919 12:27:40.864642    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:40.864687    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:45.865729    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:45.865770    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:50.867146    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:50.867189    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:27:55.868935    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:27:55.868960    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:00.871024    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:00.871068    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:05.873326    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:05.873834    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:05.948250    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:05.948349    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:05.964506    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:05.964597    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:05.976093    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:05.976172    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:05.986735    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:05.986810    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:05.997057    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:05.997133    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:06.007278    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:06.007364    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:06.017863    4788 logs.go:276] 0 containers: []
	W0919 12:28:06.017874    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:06.017950    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:06.028230    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:06.028244    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:06.028249    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:28:06.039690    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:06.039700    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:06.051563    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:06.051573    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:06.063371    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:06.063382    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:28:06.080597    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:06.080608    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:28:06.091878    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:06.091887    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:06.116587    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:06.116593    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:06.120656    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:06.120664    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:06.159611    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:06.159627    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:06.174487    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:06.174496    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:06.192610    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:06.192619    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:06.216195    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:06.216206    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:28:06.227323    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:06.227336    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:08.767543    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:13.770265    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:13.770872    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:13.810384    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:13.810549    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:13.835128    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:13.835242    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:13.849715    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:13.849804    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:13.861811    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:13.861895    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:13.872865    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:13.872955    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:13.882878    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:13.882961    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:13.893042    4788 logs.go:276] 0 containers: []
	W0919 12:28:13.893054    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:13.893127    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:13.903923    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:13.903942    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:13.903948    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:28:13.922459    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:13.922470    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:13.959483    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:13.959492    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:13.976200    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:13.976215    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:13.990095    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:13.990103    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:14.001913    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:14.001923    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:14.016963    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:14.016974    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:14.028065    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:14.028075    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:28:14.039378    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:14.039392    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:14.062952    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:14.062964    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:14.067440    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:14.067446    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:14.104690    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:14.104700    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:28:14.116583    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:14.116593    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
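
Each retry cycle begins by locating one container per control-plane component with a docker ps name filter, which produces the "N containers: [...]" lines above. A sketch of that discovery step, run directly against a local docker daemon for simplicity (minikube issues the same command inside the VM through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers stands in for the "docker ps -a --filter=name=k8s_<name>"
    // calls in the log: it returns the IDs of all containers, running or
    // exited, whose name matches the given component.
    func listContainers(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil // in this sketch a failed docker call yields no IDs
        }
        return strings.Fields(string(out)) // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids := listContainers(c)
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }

Note that the filter matches stopped containers too (docker ps -a), which is why a component can report more IDs than it has running pods.
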
	I0919 12:28:16.629848    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:21.630626    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:21.631284    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:21.671281    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:21.671456    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:21.693319    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:21.693459    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:21.708658    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:21.708746    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:21.721352    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:21.721433    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:21.731902    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:21.731989    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:21.742891    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:21.742981    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:21.753264    4788 logs.go:276] 0 containers: []
	W0919 12:28:21.753275    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:21.753346    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:21.763309    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:21.763327    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:21.763332    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:21.775540    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:21.775553    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:28:21.796341    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:21.796353    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:28:21.808022    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:21.808036    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:21.845819    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:21.845826    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:21.880205    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:21.880217    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:21.895208    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:21.895217    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:21.906948    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:21.906958    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:21.922201    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:21.922212    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:28:21.936647    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:21.936658    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:21.960203    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:21.960210    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:21.964174    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:21.964180    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:21.977476    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:21.977488    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
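
Once the containers are known, every log source reduces to a single shell command: docker logs --tail 400 for each container, journalctl for the kubelet and Docker/cri-docker units, and a filtered dmesg. The sketch below runs the same commands locally; the map-based fan-out is illustrative, though the varying gathering order between cycles above is consistent with iterating a Go map, as here:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs mirrors the "Gathering logs for ..." fan-out: each source
    // maps to one shell command executed on the node (run locally here;
    // minikube runs them through its ssh_runner). The commands are copied
    // from the log above.
    func gatherLogs(containers map[string]string) {
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, id := range containers {
            cmds[name] = "docker logs --tail 400 " + id
        }
        for name, cmd := range cmds { // map order is random, like the cycles above
            fmt.Printf("Gathering logs for %s ...\n", name)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("  failed: %v\n", err)
            } else {
                fmt.Printf("  %d bytes\n", len(out))
            }
        }
    }

    func main() {
        gatherLogs(map[string]string{"kube-apiserver": "56d59536372c"})
    }
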
	I0919 12:28:24.490772    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:29.493365    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:29.493921    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:29.535507    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:29.535668    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:29.559673    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:29.559798    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:29.574610    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:29.574698    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:29.587045    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:29.587125    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:29.597920    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:29.597992    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:29.608672    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:29.608740    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:29.619568    4788 logs.go:276] 0 containers: []
	W0919 12:28:29.619582    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:29.619657    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:29.630150    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:29.630166    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:29.630172    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:29.666808    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:29.666816    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:29.681542    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:29.681552    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:29.695263    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:29.695273    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:29.708071    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:29.708082    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:28:29.719101    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:29.719111    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:29.742287    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:29.742294    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:29.746788    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:29.746797    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:29.780969    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:29.780981    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:28:29.793036    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:29.793051    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:29.808252    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:29.808265    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:29.824506    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:29.824517    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:28:29.842261    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:29.842271    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:28:32.359494    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:37.359766    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:37.360048    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:37.381242    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:37.381358    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:37.395561    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:37.395642    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:37.412274    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:37.412352    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:37.422527    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:37.422613    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:37.432832    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:37.432913    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:37.443391    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:37.443477    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:37.453649    4788 logs.go:276] 0 containers: []
	W0919 12:28:37.453660    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:37.453724    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:37.463966    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:37.463980    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:37.463986    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:37.474638    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:37.474649    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:37.489701    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:37.489711    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:28:37.507648    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:37.507659    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:28:37.518657    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:37.518667    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:37.557231    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:37.557241    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:37.561875    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:37.561883    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:37.576161    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:37.576172    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:37.589634    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:37.589648    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:37.613378    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:37.613386    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:37.651479    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:37.651493    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:28:37.663040    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:37.663051    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:37.674240    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:37.674250    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
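
The "container status" step uses a shell fallback chain: in sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the command substitution resolves crictl's full path when it is installed (otherwise it leaves the bare name, which then fails), and the outer || falls back to plain docker ps. The same logic in Go, as a rough sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus reproduces the fallback behind the "container status"
    // entries: prefer the CRI-level view from crictl, and fall back to plain
    // docker ps when crictl is absent or errors out.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("/bin/bash", "-c", "sudo crictl ps -a").CombinedOutput(); err == nil {
            return out, nil
        }
        return exec.Command("/bin/bash", "-c", "sudo docker ps -a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(string(out))
    }
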
	I0919 12:28:40.185887    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:45.186776    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:45.187371    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:45.229688    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:45.229857    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:45.251034    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:45.251185    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:45.266870    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:45.266960    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:45.279565    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:45.279656    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:45.290293    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:45.290399    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:45.300594    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:45.300671    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:45.310515    4788 logs.go:276] 0 containers: []
	W0919 12:28:45.310532    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:45.310602    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:45.325936    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:45.325955    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:45.325961    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:45.364840    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:45.364849    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:45.369187    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:45.369196    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:45.382566    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:45.382578    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:28:45.394125    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:45.394135    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:45.417960    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:45.417968    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:28:45.429178    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:45.429188    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:45.466772    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:45.466787    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:45.481360    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:45.481376    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:28:45.493637    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:45.493649    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:45.505834    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:45.505843    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:45.520999    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:45.521011    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:45.538325    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:45.538335    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:28:48.057804    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:28:53.060435    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:28:53.060907    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:28:53.107651    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:28:53.107833    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:28:53.126427    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:28:53.126532    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:28:53.140369    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:28:53.140468    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:28:53.152154    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:28:53.152233    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:28:53.166945    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:28:53.167031    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:28:53.177825    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:28:53.177902    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:28:53.188705    4788 logs.go:276] 0 containers: []
	W0919 12:28:53.188720    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:28:53.188795    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:28:53.199261    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:28:53.199275    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:28:53.199281    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:28:53.203441    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:28:53.203451    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:28:53.220042    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:28:53.220052    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:28:53.231473    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:28:53.231486    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:28:53.254802    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:28:53.254811    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:28:53.265625    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:28:53.265638    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:28:53.302753    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:28:53.302764    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:28:53.337536    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:28:53.337547    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:28:53.351980    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:28:53.351998    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:28:53.366189    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:28:53.366203    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:28:53.379324    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:28:53.379338    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:28:53.390111    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:28:53.390122    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:28:53.404610    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:28:53.404625    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
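
Stepping back, this whole stretch of the log is one loop: probe /healthz, and while it keeps timing out, re-enumerate the containers and re-gather their logs, pausing briefly between rounds. A schematic of that outer loop, with probe and gather standing in for the helpers sketched earlier (both names are illustrative, and the 2.5-second pause is inferred from the timestamps, not a documented minikube constant):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer sketches the outer loop driving this section: probe
    // the apiserver, and on every failure snapshot the node's logs, until
    // either the probe succeeds or an overall deadline runs out.
    func waitForAPIServer(probe func() error, gather func(), deadline time.Duration) error {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if err := probe(); err == nil {
                return nil // apiserver answered /healthz
            }
            gather()                            // capture state while unhealthy
            time.Sleep(2500 * time.Millisecond) // inferred pause between cycles
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        err := waitForAPIServer(
            func() error { return errors.New("context deadline exceeded") },
            func() { fmt.Println("gathering logs ...") },
            10*time.Second,
        )
        fmt.Println(err)
    }
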
	I0919 12:28:55.924836    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:00.927443    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:00.927802    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:00.958394    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:00.958580    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:00.981001    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:00.981138    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:00.997266    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:29:00.997366    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:01.010569    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:01.010648    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:01.021279    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:01.021368    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:01.031838    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:01.031905    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:01.041399    4788 logs.go:276] 0 containers: []
	W0919 12:29:01.041414    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:01.041483    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:01.051832    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:01.051848    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:01.051853    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:01.063538    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:01.063550    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:01.075196    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:01.075209    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:01.090325    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:01.090338    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:01.104892    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:01.104903    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:01.122182    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:01.122191    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:01.161763    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:01.161776    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:01.196556    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:01.196572    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:01.211724    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:01.211737    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:01.235554    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:01.235572    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:01.247766    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:01.247778    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:01.252185    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:01.252195    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:01.267689    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:01.267700    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:03.781795    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:08.784106    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:08.784628    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:08.820344    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:08.820507    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:08.839816    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:08.839930    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:08.853637    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:29:08.853738    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:08.870024    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:08.870129    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:08.880647    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:08.880728    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:08.891129    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:08.891217    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:08.900820    4788 logs.go:276] 0 containers: []
	W0919 12:29:08.900833    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:08.900914    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:08.910764    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:08.910779    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:08.910786    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:08.914905    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:08.914914    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:08.952779    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:08.952791    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:08.967055    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:08.967069    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:08.978773    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:08.978785    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:08.990529    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:08.990540    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:09.001741    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:09.001754    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:09.026426    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:09.026434    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:09.065036    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:09.065042    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:09.076716    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:09.076730    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:09.091687    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:09.091696    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:09.108764    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:09.108774    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:09.119959    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:09.119974    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:11.635895    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:16.638123    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:16.638588    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:16.675498    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:16.675651    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:16.695312    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:16.695422    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:16.713118    4788 logs.go:276] 2 containers: [3590f2fec45b 54f3cd388f87]
	I0919 12:29:16.713212    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:16.725258    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:16.725342    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:16.735880    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:16.735964    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:16.746374    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:16.746455    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:16.756580    4788 logs.go:276] 0 containers: []
	W0919 12:29:16.756593    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:16.756667    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:16.766745    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:16.766761    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:16.766766    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:16.778532    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:16.778542    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:16.794053    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:16.794062    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:16.812709    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:16.812720    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:16.851100    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:16.851108    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:16.855091    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:16.855100    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:16.889082    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:16.889093    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:16.908252    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:16.908262    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:16.924496    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:16.924506    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:16.947754    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:16.947763    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:16.959011    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:16.959024    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:16.972389    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:16.972399    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:16.985001    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:16.985011    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
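
The "describe nodes" step is the one log source that goes through the Kubernetes API machinery rather than the container runtime: it invokes the kubectl binary minikube installed inside the VM against the VM's own kubeconfig. A local sketch, with both paths copied verbatim from the log above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes mirrors the "describe nodes" step: run the kubectl
    // binary that minikube placed inside the VM against the VM's kubeconfig,
    // so the output reflects the cluster's own view of its nodes.
    func describeNodes() (string, error) {
        cmd := "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
            " --kubeconfig=/var/lib/minikube/kubeconfig"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
            return
        }
        fmt.Print(out)
    }
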
	I0919 12:29:19.498875    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:24.501515    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:24.501586    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:24.512481    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:24.512548    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:24.523649    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:24.523721    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:24.536193    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:29:24.536288    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:24.548379    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:24.548442    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:24.559008    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:24.559074    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:24.571367    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:24.571459    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:24.586201    4788 logs.go:276] 0 containers: []
	W0919 12:29:24.586214    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:24.586278    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:24.597774    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:24.597795    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:24.597800    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:24.610861    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:24.610874    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:24.626713    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:24.626724    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:24.653555    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:24.653566    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:24.691920    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:29:24.691933    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:29:24.702963    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:24.702974    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:24.715514    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:24.715523    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:24.729995    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:24.730006    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:24.734497    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:24.734506    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:24.774032    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:29:24.774046    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:29:24.790074    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:24.790086    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:24.804096    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:24.804107    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:24.821379    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:24.821389    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:24.839199    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:24.839216    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:24.854731    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:24.854746    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:27.371515    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:32.372514    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:32.373003    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:32.405829    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:32.405984    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:32.424486    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:32.424595    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:32.438963    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:29:32.439048    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:32.450522    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:32.450606    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:32.460779    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:32.460868    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:32.471199    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:32.471279    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:32.481480    4788 logs.go:276] 0 containers: []
	W0919 12:29:32.481492    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:32.481564    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:32.491929    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:32.491952    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:29:32.491957    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:29:32.503986    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:32.504000    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:32.516132    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:32.516144    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:32.551783    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:32.551794    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:32.566579    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:32.566588    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:32.584085    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:32.584094    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:32.609687    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:29:32.609695    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:29:32.621656    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:32.621670    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:32.633949    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:32.633959    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:32.672461    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:32.672471    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:32.687382    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:32.687393    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:32.698918    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:32.698934    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:32.714766    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:32.714778    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:32.726439    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:32.726449    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:32.730588    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:32.730596    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:35.247500    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:40.250246    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:40.250784    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:40.290464    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:40.290617    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:40.312620    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:40.312765    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:40.328526    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:29:40.328607    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:40.341515    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:40.341605    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:40.356760    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:40.356841    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:40.367232    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:40.367301    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:40.377457    4788 logs.go:276] 0 containers: []
	W0919 12:29:40.377470    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:40.377540    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:40.388253    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:40.388271    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:40.388277    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:40.412171    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:29:40.412179    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:29:40.426910    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:40.426921    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:40.439201    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:40.439217    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:40.453487    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:40.453502    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:40.467545    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:40.467555    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:40.485103    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:40.485114    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:40.497043    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:40.497055    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:40.508916    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:40.508931    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:40.547414    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:40.547422    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:40.551718    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:40.551727    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:40.563267    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:40.563280    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:40.574884    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:40.574896    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:40.594105    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:40.594116    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:40.629333    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:29:40.629344    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:29:43.143466    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:48.145674    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:48.145781    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:48.157051    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:48.157132    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:48.171108    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:48.171187    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:48.185603    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:29:48.185672    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:48.202188    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:48.202261    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:48.213059    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:48.213137    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:48.226878    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:48.226940    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:48.238173    4788 logs.go:276] 0 containers: []
	W0919 12:29:48.238184    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:48.238241    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:48.248969    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:48.248987    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:48.248993    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:48.287757    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:48.287769    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:48.302412    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:48.302424    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:48.314553    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:29:48.314568    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:29:48.331041    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:48.331059    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:48.351672    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:48.351686    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:48.370885    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:48.370901    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:48.382853    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:48.382869    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:48.408513    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:48.408527    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:48.422244    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:48.422258    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:48.427322    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:48.427332    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:48.440270    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:48.440280    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:48.478620    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:48.478636    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:48.493652    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:29:48.493662    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:29:48.506375    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:48.506387    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:51.019383    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:29:56.021897    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:29:56.022161    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:29:56.041961    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:29:56.042072    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:29:56.056874    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:29:56.056962    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:29:56.071446    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:29:56.071524    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:29:56.084830    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:29:56.084911    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:29:56.095206    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:29:56.095279    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:29:56.105934    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:29:56.106021    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:29:56.121836    4788 logs.go:276] 0 containers: []
	W0919 12:29:56.121849    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:29:56.121924    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:29:56.133093    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:29:56.133113    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:29:56.133120    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:29:56.144719    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:29:56.144734    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:29:56.162455    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:29:56.162468    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:29:56.176510    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:29:56.176522    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:29:56.188064    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:29:56.188076    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:29:56.200558    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:29:56.200569    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:29:56.212241    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:29:56.212251    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:29:56.223834    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:29:56.223845    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:29:56.249403    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:29:56.249410    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:29:56.260721    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:29:56.260731    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:29:56.272226    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:29:56.272236    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:29:56.287455    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:29:56.287466    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:29:56.301150    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:29:56.301163    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:29:56.337245    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:29:56.337252    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:29:56.341298    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:29:56.341307    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:29:58.879297    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:03.881853    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:03.882103    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:03.903557    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:03.903688    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:03.919677    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:03.919785    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:03.932296    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:03.932391    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:03.943037    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:03.943118    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:03.953796    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:03.953883    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:03.964249    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:03.964330    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:03.993502    4788 logs.go:276] 0 containers: []
	W0919 12:30:03.993513    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:03.993581    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:04.019547    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:04.019564    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:04.019570    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:04.031056    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:04.031067    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:04.047101    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:04.047113    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:04.073048    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:04.073058    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:04.088605    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:04.088615    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:04.100513    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:04.100525    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:04.112363    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:04.112374    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:04.123961    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:04.123972    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:04.136525    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:04.136536    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:04.152208    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:04.152218    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:04.170692    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:04.170703    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:04.175301    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:04.175311    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:04.189853    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:04.189865    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:04.201645    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:04.201656    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:04.239283    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:04.239293    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:06.775657    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:11.776407    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:11.776505    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:11.787929    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:11.788030    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:11.799605    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:11.799673    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:11.811538    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:11.811633    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:11.825544    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:11.825638    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:11.836854    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:11.836922    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:11.850795    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:11.850878    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:11.862789    4788 logs.go:276] 0 containers: []
	W0919 12:30:11.862802    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:11.862869    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:11.874846    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:11.874866    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:11.874871    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:11.889959    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:11.889973    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:11.907884    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:11.907895    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:11.923859    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:11.923868    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:11.935469    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:11.935480    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:11.948132    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:11.948142    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:11.973434    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:11.973444    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:11.986027    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:11.986040    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:11.999396    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:11.999407    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:12.019094    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:12.019103    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:12.058186    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:12.058202    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:12.095041    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:12.095050    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:12.110461    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:12.110474    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:12.124057    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:12.124070    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:12.128749    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:12.128758    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:14.642770    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:19.645358    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:19.645673    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:19.677335    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:19.677481    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:19.693702    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:19.693812    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:19.706828    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:19.706917    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:19.722325    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:19.722401    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:19.732832    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:19.732923    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:19.743614    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:19.743694    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:19.753872    4788 logs.go:276] 0 containers: []
	W0919 12:30:19.753885    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:19.753950    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:19.764363    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:19.764378    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:19.764384    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:19.775787    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:19.775800    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:19.787149    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:19.787162    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:19.801161    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:19.801172    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:19.814978    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:19.814994    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:19.826075    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:19.826084    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:19.837768    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:19.837779    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:19.855614    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:19.855624    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:19.889735    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:19.889744    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:19.901551    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:19.901559    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:19.916762    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:19.916772    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:19.928314    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:19.928325    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:19.939845    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:19.939856    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:19.963435    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:19.963446    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:20.000876    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:20.000886    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:22.507313    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:27.509969    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:27.510267    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:27.543167    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:27.543296    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:27.558766    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:27.558866    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:27.571511    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:27.571601    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:27.582557    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:27.582642    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:27.593379    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:27.593459    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:27.603886    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:27.603959    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:27.614898    4788 logs.go:276] 0 containers: []
	W0919 12:30:27.614910    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:27.614980    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:27.625143    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:27.625163    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:27.625169    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:27.638018    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:27.638027    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:27.653185    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:27.653199    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:27.664813    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:27.664825    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:27.679481    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:27.679490    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:27.699412    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:27.699422    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:27.710553    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:27.710562    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:27.725868    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:27.725877    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:27.730830    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:27.730835    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:27.750567    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:27.750576    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:27.762238    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:27.762250    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:27.800463    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:27.800471    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:27.814664    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:27.814677    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:27.837927    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:27.837937    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:27.849803    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:27.849816    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:30.389115    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:35.391538    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:35.392134    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:35.429064    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:35.429202    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:35.453674    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:35.453789    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:35.469537    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:35.469630    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:35.482053    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:35.482145    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:35.496648    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:35.496723    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:35.507978    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:35.508069    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:35.518858    4788 logs.go:276] 0 containers: []
	W0919 12:30:35.518870    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:35.518940    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:35.529637    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:35.529654    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:35.529659    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:35.544410    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:35.544423    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:35.559814    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:35.559825    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:35.577630    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:35.577639    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:35.602556    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:35.602573    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:35.617435    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:35.617446    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:35.629852    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:35.629867    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:35.634627    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:35.634638    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:35.676102    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:35.676113    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:35.688880    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:35.688897    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:35.704657    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:35.704669    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:35.717625    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:35.717639    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:35.756125    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:35.756151    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:35.768581    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:35.768592    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:35.780922    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:35.780937    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:38.298314    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:43.300451    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:43.301001    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:43.339367    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:43.339535    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:43.364581    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:43.364698    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:43.379685    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:43.379789    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:43.392044    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:43.392125    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:43.402805    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:43.402882    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:43.413443    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:43.413519    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:43.423173    4788 logs.go:276] 0 containers: []
	W0919 12:30:43.423185    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:43.423255    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:43.433914    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:43.433931    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:43.433936    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:43.445340    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:43.445354    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:43.460734    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:43.460745    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:43.480801    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:43.480812    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:43.492384    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:43.492394    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:43.509708    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:43.509720    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:43.535200    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:43.535222    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:43.569135    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:43.569150    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:43.581116    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:43.581127    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:43.593279    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:43.593294    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:43.631058    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:43.631068    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:43.635295    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:43.635303    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:43.670786    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:43.670799    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:43.684930    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:43.684941    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:43.700143    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:43.700158    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:46.213381    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:51.216016    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:51.216277    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:51.250136    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:51.250327    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:51.273708    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:51.273847    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:51.289748    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:51.289873    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:51.303992    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:51.304081    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:51.314622    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:51.314753    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:51.325196    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:51.325286    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:51.335470    4788 logs.go:276] 0 containers: []
	W0919 12:30:51.335479    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:51.335548    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:51.346233    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:51.346258    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:51.346265    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:51.383063    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:51.383074    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:51.394810    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:51.394821    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:51.406327    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:51.406337    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:51.430680    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:51.430690    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:51.442101    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:51.442110    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:51.446869    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:51.446877    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:51.461643    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:51.461652    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:51.473405    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:51.473415    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:51.487258    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:51.487268    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:51.499223    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:51.499233    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:51.510965    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:51.510976    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:51.528316    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:51.528324    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:30:51.564497    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:51.564508    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:51.581500    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:51.581512    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:54.102807    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:30:59.105383    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:30:59.105900    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0919 12:30:59.143614    4788 logs.go:276] 1 containers: [56d59536372c]
	I0919 12:30:59.143797    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0919 12:30:59.163339    4788 logs.go:276] 1 containers: [d1c11e80a062]
	I0919 12:30:59.163485    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0919 12:30:59.178363    4788 logs.go:276] 4 containers: [4535ad6079a7 34d9b2a8b992 3590f2fec45b 54f3cd388f87]
	I0919 12:30:59.178476    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0919 12:30:59.190555    4788 logs.go:276] 1 containers: [4244dd55a07c]
	I0919 12:30:59.190659    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0919 12:30:59.201615    4788 logs.go:276] 1 containers: [fa9dbc304595]
	I0919 12:30:59.201697    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0919 12:30:59.212310    4788 logs.go:276] 1 containers: [2f841ea9a873]
	I0919 12:30:59.212401    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0919 12:30:59.222703    4788 logs.go:276] 0 containers: []
	W0919 12:30:59.222715    4788 logs.go:278] No container was found matching "kindnet"
	I0919 12:30:59.222800    4788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0919 12:30:59.233775    4788 logs.go:276] 1 containers: [bf1d9c652473]
	I0919 12:30:59.233797    4788 logs.go:123] Gathering logs for etcd [d1c11e80a062] ...
	I0919 12:30:59.233804    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1c11e80a062"
	I0919 12:30:59.248215    4788 logs.go:123] Gathering logs for coredns [3590f2fec45b] ...
	I0919 12:30:59.248226    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3590f2fec45b"
	I0919 12:30:59.259883    4788 logs.go:123] Gathering logs for kubelet ...
	I0919 12:30:59.259893    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 12:30:59.297312    4788 logs.go:123] Gathering logs for kube-apiserver [56d59536372c] ...
	I0919 12:30:59.297323    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d59536372c"
	I0919 12:30:59.312409    4788 logs.go:123] Gathering logs for kube-controller-manager [2f841ea9a873] ...
	I0919 12:30:59.312420    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f841ea9a873"
	I0919 12:30:59.329916    4788 logs.go:123] Gathering logs for coredns [54f3cd388f87] ...
	I0919 12:30:59.329928    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f3cd388f87"
	I0919 12:30:59.342330    4788 logs.go:123] Gathering logs for kube-scheduler [4244dd55a07c] ...
	I0919 12:30:59.342342    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4244dd55a07c"
	I0919 12:30:59.358603    4788 logs.go:123] Gathering logs for kube-proxy [fa9dbc304595] ...
	I0919 12:30:59.358614    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa9dbc304595"
	I0919 12:30:59.370877    4788 logs.go:123] Gathering logs for storage-provisioner [bf1d9c652473] ...
	I0919 12:30:59.370889    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1d9c652473"
	I0919 12:30:59.382615    4788 logs.go:123] Gathering logs for Docker ...
	I0919 12:30:59.382627    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0919 12:30:59.406990    4788 logs.go:123] Gathering logs for container status ...
	I0919 12:30:59.407000    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 12:30:59.419596    4788 logs.go:123] Gathering logs for coredns [4535ad6079a7] ...
	I0919 12:30:59.419607    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4535ad6079a7"
	I0919 12:30:59.435925    4788 logs.go:123] Gathering logs for coredns [34d9b2a8b992] ...
	I0919 12:30:59.435935    4788 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34d9b2a8b992"
	I0919 12:30:59.447932    4788 logs.go:123] Gathering logs for dmesg ...
	I0919 12:30:59.447943    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 12:30:59.452260    4788 logs.go:123] Gathering logs for describe nodes ...
	I0919 12:30:59.452268    4788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 12:31:01.988143    4788 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0919 12:31:06.990335    4788 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 12:31:07.006621    4788 out.go:201] 
	W0919 12:31:07.014873    4788 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0919 12:31:07.014908    4788 out.go:270] * 
	W0919 12:31:07.016423    4788 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:07.036883    4788 out.go:201] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-269000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.59s)
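
The log above shows the failure mode plainly: every few seconds minikube probes https://10.0.2.15:8443/healthz, gives up after the 5s client timeout, re-enumerates the control-plane containers with "docker ps -a --filter=name=k8s_...", tails each container's logs, and retries, until the 6m0s node-wait budget is exhausted. As a rough manual check (not part of the test run; the profile name is taken from the failing command above, and -k skips verification of the apiserver's self-signed certificate), the same probe could be issued from inside the guest:

	# hypothetical manual reproduction of the healthz probe
	minikube ssh -p stopped-upgrade-269000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# a healthy apiserver answers "ok"; here the request would time out,
	# matching the "context deadline exceeded" entries above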
TestPause/serial/Start (10.06s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-468000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-468000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.987734708s)
-- stdout --
	* [pause-468000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-468000" primary control-plane node in "pause-468000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-468000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-468000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-468000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-468000 -n pause-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-468000 -n pause-468000: exit status 7 (66.621292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-468000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.06s)
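
Unlike the upgrade test above, this failure never reaches Kubernetes: the qemu2 driver cannot reach the host-side socket_vmnet daemon ("Connection refused" on /var/run/socket_vmnet), so both VM creation attempts fail and minikube exits with GUEST_PROVISION. A quick host-side triage sketch (assuming socket_vmnet was installed via Homebrew, which this test environment may or may not use):

	# hypothetical checks on the Jenkins host
	ls -l /var/run/socket_vmnet              # the socket file should exist if the daemon is up
	sudo lsof /var/run/socket_vmnet          # which process, if any, is holding the socket
	brew services list | grep socket_vmnet   # daemon state when managed through brew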
TestNoKubernetes/serial/StartWithK8s (9.92s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 : exit status 80 (9.860529125s)
-- stdout --
	* [NoKubernetes-562000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-562000" primary control-plane node in "NoKubernetes-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (53.975667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.92s)
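
The stderr above also captures minikube's retry shape: host creation fails, the half-created machine is deleted, it waits five seconds ("Will try again in 5 seconds" in the -alsologtostderr traces later in this report), retries once, and only then exits with GUEST_PROVISION. A hypothetical sketch of that control flow, with startHost standing in for libmachine host creation:

	// retry_shape.go (illustrative only; names are invented).
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for libmachine host creation; it always fails here,
	// like an agent whose socket_vmnet daemon is not running.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}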

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 : exit status 80 (5.253648542s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-562000
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (55.074333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 : exit status 80 (5.257597917s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-562000
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (66.013167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 : exit status 80 (5.284239959s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-562000
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (62.539417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)
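
Each post-mortem in this group runs `status --format={{.Host}}`, receives "Stopped" on stdout together with exit status 7, and helpers_test.go records "status error: exit status 7 (may be ok)" before skipping log retrieval. The step worth noting is that stdout remains usable on a non-zero exit; a standalone sketch of recovering both the state and the exit code (profile name copied from the logs, file name hypothetical):

	// status_exit.go (illustrative only).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "NoKubernetes-562000")
		out, err := cmd.Output() // stdout is captured even when err != nil
		state := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 7 with state "Stopped" is the case the helpers
			// log as "may be ok".
			fmt.Printf("state=%q exit=%d\n", state, exitErr.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("state=%q exit=0\n", state)
	}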

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.897197166s)

-- stdout --
	* [auto-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-342000" primary control-plane node in "auto-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:29:13.426808    5092 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:29:13.426935    5092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:13.426939    5092 out.go:358] Setting ErrFile to fd 2...
	I0919 12:29:13.426942    5092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:13.427071    5092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:29:13.428144    5092 out.go:352] Setting JSON to false
	I0919 12:29:13.444624    5092 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3518,"bootTime":1726770635,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:29:13.444721    5092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:29:13.449266    5092 out.go:177] * [auto-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:29:13.457021    5092 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:29:13.457069    5092 notify.go:220] Checking for updates...
	I0919 12:29:13.463981    5092 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:29:13.467052    5092 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:29:13.470080    5092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:29:13.472991    5092 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:29:13.476025    5092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:29:13.479487    5092 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:29:13.479550    5092 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:29:13.479596    5092 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:29:13.484965    5092 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:29:13.493119    5092 start.go:297] selected driver: qemu2
	I0919 12:29:13.493134    5092 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:29:13.493143    5092 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:29:13.495502    5092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:29:13.498987    5092 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:29:13.502092    5092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:29:13.502111    5092 cni.go:84] Creating CNI manager for ""
	I0919 12:29:13.502137    5092 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:29:13.502144    5092 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:29:13.502178    5092 start.go:340] cluster config:
	{Name:auto-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:29:13.505719    5092 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:29:13.511002    5092 out.go:177] * Starting "auto-342000" primary control-plane node in "auto-342000" cluster
	I0919 12:29:13.515071    5092 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:29:13.515084    5092 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:29:13.515091    5092 cache.go:56] Caching tarball of preloaded images
	I0919 12:29:13.515151    5092 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:29:13.515156    5092 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:29:13.515207    5092 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/auto-342000/config.json ...
	I0919 12:29:13.515217    5092 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/auto-342000/config.json: {Name:mkb93a2771a402464b37f9141f7cb4653370460c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:29:13.515627    5092 start.go:360] acquireMachinesLock for auto-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:13.515660    5092 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "auto-342000"
	I0919 12:29:13.515670    5092 start.go:93] Provisioning new machine with config: &{Name:auto-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:13.515695    5092 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:13.523043    5092 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:13.538326    5092 start.go:159] libmachine.API.Create for "auto-342000" (driver="qemu2")
	I0919 12:29:13.538358    5092 client.go:168] LocalClient.Create starting
	I0919 12:29:13.538429    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:13.538460    5092 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:13.538472    5092 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:13.538509    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:13.538532    5092 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:13.538540    5092 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:13.538964    5092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:13.700582    5092 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:13.844457    5092 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:13.844465    5092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:13.844676    5092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2
	I0919 12:29:13.854335    5092 main.go:141] libmachine: STDOUT: 
	I0919 12:29:13.854354    5092 main.go:141] libmachine: STDERR: 
	I0919 12:29:13.854423    5092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2 +20000M
	I0919 12:29:13.862334    5092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:13.862357    5092 main.go:141] libmachine: STDERR: 
	I0919 12:29:13.862378    5092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2
	I0919 12:29:13.862385    5092 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:13.862400    5092 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:13.862450    5092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a3:51:06:06:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2
	I0919 12:29:13.864154    5092 main.go:141] libmachine: STDOUT: 
	I0919 12:29:13.864175    5092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:13.864198    5092 client.go:171] duration metric: took 325.840167ms to LocalClient.Create
	I0919 12:29:15.866249    5092 start.go:128] duration metric: took 2.350613792s to createHost
	I0919 12:29:15.866293    5092 start.go:83] releasing machines lock for "auto-342000", held for 2.350699625s
	W0919 12:29:15.866323    5092 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:15.874352    5092 out.go:177] * Deleting "auto-342000" in qemu2 ...
	W0919 12:29:15.903488    5092 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:15.903505    5092 start.go:729] Will try again in 5 seconds ...
	I0919 12:29:20.905539    5092 start.go:360] acquireMachinesLock for auto-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:20.905767    5092 start.go:364] duration metric: took 188.667µs to acquireMachinesLock for "auto-342000"
	I0919 12:29:20.905792    5092 start.go:93] Provisioning new machine with config: &{Name:auto-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:20.905867    5092 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:20.913154    5092 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:20.937266    5092 start.go:159] libmachine.API.Create for "auto-342000" (driver="qemu2")
	I0919 12:29:20.937297    5092 client.go:168] LocalClient.Create starting
	I0919 12:29:20.937399    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:20.937438    5092 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:20.937448    5092 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:20.937487    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:20.937515    5092 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:20.937528    5092 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:20.937873    5092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:21.099754    5092 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:21.222039    5092 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:21.222046    5092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:21.222235    5092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2
	I0919 12:29:21.231614    5092 main.go:141] libmachine: STDOUT: 
	I0919 12:29:21.231633    5092 main.go:141] libmachine: STDERR: 
	I0919 12:29:21.231689    5092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2 +20000M
	I0919 12:29:21.239552    5092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:21.239566    5092 main.go:141] libmachine: STDERR: 
	I0919 12:29:21.239579    5092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2
	I0919 12:29:21.239588    5092 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:21.239595    5092 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:21.239634    5092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:34:d3:df:ad:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/auto-342000/disk.qcow2
	I0919 12:29:21.241338    5092 main.go:141] libmachine: STDOUT: 
	I0919 12:29:21.241362    5092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:21.241374    5092 client.go:171] duration metric: took 304.081917ms to LocalClient.Create
	I0919 12:29:23.243521    5092 start.go:128] duration metric: took 2.33769075s to createHost
	I0919 12:29:23.243675    5092 start.go:83] releasing machines lock for "auto-342000", held for 2.337943916s
	W0919 12:29:23.244070    5092 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:23.260869    5092 out.go:201] 
	W0919 12:29:23.264830    5092 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:29:23.264854    5092 out.go:270] * 
	* 
	W0919 12:29:23.267501    5092 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:29:23.281699    5092 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
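
The -alsologtostderr trace above exposes the mechanism that is failing: libmachine does not invoke qemu-system-aarch64 directly, but through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu as file descriptor 3, which is why the command line carries -netdev socket,id=net0,fd=3. The sketch below illustrates that fd-handoff pattern in Go, where os/exec numbers ExtraFiles starting at fd 3; the child command is a placeholder, not qemu:

	// fd_handoff.go (illustrative only; the child is a placeholder).
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("connect failed (the same error as in the tests): %v", err)
		}
		// Obtain an *os.File for the socket so it can survive the exec.
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("cat") // placeholder child; qemu would go here
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,fd=3".
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}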

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.810093292s)

-- stdout --
	* [kindnet-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-342000" primary control-plane node in "kindnet-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:29:25.500396    5201 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:29:25.500546    5201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:25.500552    5201 out.go:358] Setting ErrFile to fd 2...
	I0919 12:29:25.500554    5201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:25.500697    5201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:29:25.501876    5201 out.go:352] Setting JSON to false
	I0919 12:29:25.518807    5201 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3530,"bootTime":1726770635,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:29:25.518876    5201 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:29:25.525059    5201 out.go:177] * [kindnet-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:29:25.532970    5201 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:29:25.533092    5201 notify.go:220] Checking for updates...
	I0919 12:29:25.539863    5201 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:29:25.542915    5201 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:29:25.545822    5201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:29:25.548862    5201 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:29:25.551955    5201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:29:25.555219    5201 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:29:25.555287    5201 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:29:25.555327    5201 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:29:25.559966    5201 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:29:25.565880    5201 start.go:297] selected driver: qemu2
	I0919 12:29:25.565887    5201 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:29:25.565893    5201 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:29:25.568133    5201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:29:25.570938    5201 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:29:25.573989    5201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:29:25.574005    5201 cni.go:84] Creating CNI manager for "kindnet"
	I0919 12:29:25.574008    5201 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 12:29:25.574035    5201 start.go:340] cluster config:
	{Name:kindnet-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:29:25.577444    5201 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:29:25.584832    5201 out.go:177] * Starting "kindnet-342000" primary control-plane node in "kindnet-342000" cluster
	I0919 12:29:25.588889    5201 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:29:25.588903    5201 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:29:25.588910    5201 cache.go:56] Caching tarball of preloaded images
	I0919 12:29:25.588966    5201 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:29:25.588971    5201 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:29:25.589030    5201 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kindnet-342000/config.json ...
	I0919 12:29:25.589041    5201 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kindnet-342000/config.json: {Name:mk3ec537a35f26f26aed07cf522d86a268fa84c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:29:25.589240    5201 start.go:360] acquireMachinesLock for kindnet-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:25.589270    5201 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "kindnet-342000"
	I0919 12:29:25.589279    5201 start.go:93] Provisioning new machine with config: &{Name:kindnet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:25.589301    5201 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:25.596907    5201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:25.612343    5201 start.go:159] libmachine.API.Create for "kindnet-342000" (driver="qemu2")
	I0919 12:29:25.612370    5201 client.go:168] LocalClient.Create starting
	I0919 12:29:25.612429    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:25.612459    5201 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:25.612467    5201 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:25.612509    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:25.612534    5201 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:25.612541    5201 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:25.612880    5201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:25.772467    5201 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:25.853961    5201 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:25.853969    5201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:25.854171    5201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2
	I0919 12:29:25.863586    5201 main.go:141] libmachine: STDOUT: 
	I0919 12:29:25.863608    5201 main.go:141] libmachine: STDERR: 
	I0919 12:29:25.863689    5201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2 +20000M
	I0919 12:29:25.872997    5201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:25.873032    5201 main.go:141] libmachine: STDERR: 
	I0919 12:29:25.873053    5201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2
	I0919 12:29:25.873058    5201 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:25.873068    5201 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:25.873099    5201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0e:bd:25:8a:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2
	I0919 12:29:25.875136    5201 main.go:141] libmachine: STDOUT: 
	I0919 12:29:25.875151    5201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:25.875174    5201 client.go:171] duration metric: took 262.804333ms to LocalClient.Create
	I0919 12:29:27.877243    5201 start.go:128] duration metric: took 2.287987791s to createHost
	I0919 12:29:27.877278    5201 start.go:83] releasing machines lock for "kindnet-342000", held for 2.288074667s
	W0919 12:29:27.877302    5201 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:27.896175    5201 out.go:177] * Deleting "kindnet-342000" in qemu2 ...
	W0919 12:29:27.913299    5201 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:27.913308    5201 start.go:729] Will try again in 5 seconds ...
	I0919 12:29:32.915278    5201 start.go:360] acquireMachinesLock for kindnet-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:32.915442    5201 start.go:364] duration metric: took 132.542µs to acquireMachinesLock for "kindnet-342000"
	I0919 12:29:32.915462    5201 start.go:93] Provisioning new machine with config: &{Name:kindnet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:32.915545    5201 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:32.925451    5201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:32.944094    5201 start.go:159] libmachine.API.Create for "kindnet-342000" (driver="qemu2")
	I0919 12:29:32.944126    5201 client.go:168] LocalClient.Create starting
	I0919 12:29:32.944199    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:32.944235    5201 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:32.944246    5201 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:32.944279    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:32.944304    5201 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:32.944310    5201 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:32.944705    5201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:33.103092    5201 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:33.215427    5201 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:33.215434    5201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:33.215645    5201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2
	I0919 12:29:33.224880    5201 main.go:141] libmachine: STDOUT: 
	I0919 12:29:33.224899    5201 main.go:141] libmachine: STDERR: 
	I0919 12:29:33.224959    5201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2 +20000M
	I0919 12:29:33.232732    5201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:33.232747    5201 main.go:141] libmachine: STDERR: 
	I0919 12:29:33.232762    5201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2
	I0919 12:29:33.232768    5201 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:33.232781    5201 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:33.232814    5201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a7:eb:2c:ca:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kindnet-342000/disk.qcow2
	I0919 12:29:33.234440    5201 main.go:141] libmachine: STDOUT: 
	I0919 12:29:33.234454    5201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:33.234466    5201 client.go:171] duration metric: took 290.343709ms to LocalClient.Create
	I0919 12:29:35.236626    5201 start.go:128] duration metric: took 2.321114333s to createHost
	I0919 12:29:35.236704    5201 start.go:83] releasing machines lock for "kindnet-342000", held for 2.321320542s
	W0919 12:29:35.237236    5201 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:35.246033    5201 out.go:201] 
	W0919 12:29:35.255079    5201 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:29:35.255123    5201 out.go:270] * 
	* 
	W0919 12:29:35.258062    5201 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:29:35.267951    5201 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
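Every attempt in this group fails at the same step: qemu-img convert and qemu-img resize complete with empty STDERR, but launching QEMU through socket_vmnet_client is refused, meaning nothing is listening on /var/run/socket_vmnet on the build host. A minimal host-side check follows, assuming socket_vmnet was installed via Homebrew (the service name and commands below are assumptions, not taken from this log):

	# Check whether the socket_vmnet daemon is up; start it if not.
	# (Assumes a Homebrew install of socket_vmnet; paths may differ.)
	ls -l /var/run/socket_vmnet                        # socket should exist while the daemon runs
	pgrep -fl socket_vmnet                             # daemon process should be listed
	sudo "$(which brew)" services start socket_vmnet   # start the launchd service if missing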

TestNetworkPlugins/group/flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.773006416s)

-- stdout --
	* [flannel-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-342000" primary control-plane node in "flannel-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:29:37.547189    5317 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:29:37.547316    5317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:37.547320    5317 out.go:358] Setting ErrFile to fd 2...
	I0919 12:29:37.547322    5317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:37.547445    5317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:29:37.548781    5317 out.go:352] Setting JSON to false
	I0919 12:29:37.565554    5317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3542,"bootTime":1726770635,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:29:37.565627    5317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:29:37.574213    5317 out.go:177] * [flannel-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:29:37.582110    5317 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:29:37.582157    5317 notify.go:220] Checking for updates...
	I0919 12:29:37.590235    5317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:29:37.593202    5317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:29:37.596244    5317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:29:37.599251    5317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:29:37.602214    5317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:29:37.605522    5317 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:29:37.605588    5317 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:29:37.605640    5317 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:29:37.609240    5317 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:29:37.616172    5317 start.go:297] selected driver: qemu2
	I0919 12:29:37.616179    5317 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:29:37.616185    5317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:29:37.618343    5317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:29:37.621196    5317 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:29:37.622607    5317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:29:37.622623    5317 cni.go:84] Creating CNI manager for "flannel"
	I0919 12:29:37.622628    5317 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0919 12:29:37.622656    5317 start.go:340] cluster config:
	{Name:flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:29:37.626122    5317 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:29:37.633187    5317 out.go:177] * Starting "flannel-342000" primary control-plane node in "flannel-342000" cluster
	I0919 12:29:37.637157    5317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:29:37.637182    5317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:29:37.637192    5317 cache.go:56] Caching tarball of preloaded images
	I0919 12:29:37.637254    5317 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:29:37.637260    5317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:29:37.637320    5317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/flannel-342000/config.json ...
	I0919 12:29:37.637331    5317 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/flannel-342000/config.json: {Name:mkd77fae247af3f5e33cd2079878b133fb644eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:29:37.637550    5317 start.go:360] acquireMachinesLock for flannel-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:37.637588    5317 start.go:364] duration metric: took 30.041µs to acquireMachinesLock for "flannel-342000"
	I0919 12:29:37.637599    5317 start.go:93] Provisioning new machine with config: &{Name:flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:37.637628    5317 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:37.645111    5317 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:37.661778    5317 start.go:159] libmachine.API.Create for "flannel-342000" (driver="qemu2")
	I0919 12:29:37.661812    5317 client.go:168] LocalClient.Create starting
	I0919 12:29:37.661887    5317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:37.661920    5317 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:37.661930    5317 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:37.661970    5317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:37.661993    5317 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:37.662000    5317 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:37.662369    5317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:37.822171    5317 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:37.864948    5317 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:37.864953    5317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:37.865140    5317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2
	I0919 12:29:37.874410    5317 main.go:141] libmachine: STDOUT: 
	I0919 12:29:37.874426    5317 main.go:141] libmachine: STDERR: 
	I0919 12:29:37.874479    5317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2 +20000M
	I0919 12:29:37.882347    5317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:37.882363    5317 main.go:141] libmachine: STDERR: 
	I0919 12:29:37.882377    5317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2
	I0919 12:29:37.882381    5317 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:37.882394    5317 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:37.882421    5317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:96:16:6c:8d:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2
	I0919 12:29:37.884012    5317 main.go:141] libmachine: STDOUT: 
	I0919 12:29:37.884026    5317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:37.884049    5317 client.go:171] duration metric: took 222.238208ms to LocalClient.Create
	I0919 12:29:39.886193    5317 start.go:128] duration metric: took 2.24860275s to createHost
	I0919 12:29:39.886304    5317 start.go:83] releasing machines lock for "flannel-342000", held for 2.248770125s
	W0919 12:29:39.886363    5317 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:39.892634    5317 out.go:177] * Deleting "flannel-342000" in qemu2 ...
	W0919 12:29:39.925045    5317 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:39.925077    5317 start.go:729] Will try again in 5 seconds ...
	I0919 12:29:44.926750    5317 start.go:360] acquireMachinesLock for flannel-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:44.927308    5317 start.go:364] duration metric: took 450.5µs to acquireMachinesLock for "flannel-342000"
	I0919 12:29:44.927453    5317 start.go:93] Provisioning new machine with config: &{Name:flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:44.927813    5317 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:44.935615    5317 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:44.984412    5317 start.go:159] libmachine.API.Create for "flannel-342000" (driver="qemu2")
	I0919 12:29:44.984474    5317 client.go:168] LocalClient.Create starting
	I0919 12:29:44.984610    5317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:44.984672    5317 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:44.984689    5317 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:44.984767    5317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:44.984813    5317 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:44.984836    5317 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:44.985494    5317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:45.155499    5317 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:45.218968    5317 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:45.218979    5317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:45.219167    5317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2
	I0919 12:29:45.229781    5317 main.go:141] libmachine: STDOUT: 
	I0919 12:29:45.229802    5317 main.go:141] libmachine: STDERR: 
	I0919 12:29:45.229888    5317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2 +20000M
	I0919 12:29:45.238269    5317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:45.238293    5317 main.go:141] libmachine: STDERR: 
	I0919 12:29:45.238307    5317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2
	I0919 12:29:45.238312    5317 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:45.238318    5317 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:45.238350    5317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:7b:95:1b:de:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/flannel-342000/disk.qcow2
	I0919 12:29:45.240011    5317 main.go:141] libmachine: STDOUT: 
	I0919 12:29:45.240026    5317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:45.240039    5317 client.go:171] duration metric: took 255.568583ms to LocalClient.Create
	I0919 12:29:47.242223    5317 start.go:128] duration metric: took 2.314416834s to createHost
	I0919 12:29:47.242335    5317 start.go:83] releasing machines lock for "flannel-342000", held for 2.3150725s
	W0919 12:29:47.242759    5317 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:47.259504    5317 out.go:201] 
	W0919 12:29:47.264354    5317 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:29:47.264386    5317 out.go:270] * 
	* 
	W0919 12:29:47.267013    5317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:29:47.277410    5317 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.77s)
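The same refusal can likely be reproduced outside the test harness by invoking socket_vmnet_client directly, mirroring the QEMU command line in the log (the socket and client paths are taken from the log; the trailing /bin/echo is an arbitrary placeholder command):

	# socket_vmnet_client connects to the socket and execs the given command
	# with the connection passed as fd 3; with no daemon listening, it should
	# fail with the same "Connection refused" seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /bin/echo ok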

TestNetworkPlugins/group/enable-default-cni/Start (9.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0919 12:29:59.112070    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.966761584s)

-- stdout --
	* [enable-default-cni-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-342000" primary control-plane node in "enable-default-cni-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:29:49.747271    5442 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:29:49.747410    5442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:49.747414    5442 out.go:358] Setting ErrFile to fd 2...
	I0919 12:29:49.747416    5442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:29:49.747572    5442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:29:49.748638    5442 out.go:352] Setting JSON to false
	I0919 12:29:49.765954    5442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3554,"bootTime":1726770635,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:29:49.766022    5442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:29:49.772008    5442 out.go:177] * [enable-default-cni-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:29:49.778948    5442 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:29:49.779014    5442 notify.go:220] Checking for updates...
	I0919 12:29:49.785868    5442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:29:49.788901    5442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:29:49.791866    5442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:29:49.794834    5442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:29:49.797881    5442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:29:49.801124    5442 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:29:49.801193    5442 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:29:49.801241    5442 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:29:49.805781    5442 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:29:49.812793    5442 start.go:297] selected driver: qemu2
	I0919 12:29:49.812802    5442 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:29:49.812811    5442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:29:49.815043    5442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:29:49.817897    5442 out.go:177] * Automatically selected the socket_vmnet network
	E0919 12:29:49.820926    5442 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0919 12:29:49.820939    5442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:29:49.820957    5442 cni.go:84] Creating CNI manager for "bridge"
	I0919 12:29:49.820962    5442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:29:49.820999    5442 start.go:340] cluster config:
	{Name:enable-default-cni-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:29:49.824753    5442 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:29:49.832849    5442 out.go:177] * Starting "enable-default-cni-342000" primary control-plane node in "enable-default-cni-342000" cluster
	I0919 12:29:49.836678    5442 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:29:49.836696    5442 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:29:49.836703    5442 cache.go:56] Caching tarball of preloaded images
	I0919 12:29:49.836780    5442 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:29:49.836786    5442 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:29:49.836855    5442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/enable-default-cni-342000/config.json ...
	I0919 12:29:49.836868    5442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/enable-default-cni-342000/config.json: {Name:mkf6993adb5f7c14f6222e883414ab1d2750c4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:29:49.837228    5442 start.go:360] acquireMachinesLock for enable-default-cni-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:49.837284    5442 start.go:364] duration metric: took 44µs to acquireMachinesLock for "enable-default-cni-342000"
	I0919 12:29:49.837297    5442 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:49.837331    5442 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:49.844815    5442 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:49.862590    5442 start.go:159] libmachine.API.Create for "enable-default-cni-342000" (driver="qemu2")
	I0919 12:29:49.862636    5442 client.go:168] LocalClient.Create starting
	I0919 12:29:49.862701    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:49.862746    5442 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:49.862758    5442 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:49.862796    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:49.862820    5442 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:49.862826    5442 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:49.863217    5442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:50.023081    5442 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:50.165963    5442 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:50.165970    5442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:50.166169    5442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0919 12:29:50.175758    5442 main.go:141] libmachine: STDOUT: 
	I0919 12:29:50.175779    5442 main.go:141] libmachine: STDERR: 
	I0919 12:29:50.175838    5442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2 +20000M
	I0919 12:29:50.184071    5442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:50.184089    5442 main.go:141] libmachine: STDERR: 
	I0919 12:29:50.184109    5442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0919 12:29:50.184117    5442 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:50.184130    5442 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:50.184163    5442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:97:0a:04:50:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0919 12:29:50.185844    5442 main.go:141] libmachine: STDOUT: 
	I0919 12:29:50.185858    5442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:50.185883    5442 client.go:171] duration metric: took 323.249708ms to LocalClient.Create
	I0919 12:29:52.188054    5442 start.go:128] duration metric: took 2.350755625s to createHost
	I0919 12:29:52.188196    5442 start.go:83] releasing machines lock for "enable-default-cni-342000", held for 2.350959s
	W0919 12:29:52.188290    5442 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:52.204178    5442 out.go:177] * Deleting "enable-default-cni-342000" in qemu2 ...
	W0919 12:29:52.238222    5442 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:52.238246    5442 start.go:729] Will try again in 5 seconds ...
	I0919 12:29:57.240311    5442 start.go:360] acquireMachinesLock for enable-default-cni-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:29:57.240639    5442 start.go:364] duration metric: took 259.291µs to acquireMachinesLock for "enable-default-cni-342000"
	I0919 12:29:57.240740    5442 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:29:57.240862    5442 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:29:57.249221    5442 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:29:57.283211    5442 start.go:159] libmachine.API.Create for "enable-default-cni-342000" (driver="qemu2")
	I0919 12:29:57.283270    5442 client.go:168] LocalClient.Create starting
	I0919 12:29:57.283396    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:29:57.283459    5442 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:57.283476    5442 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:57.283532    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:29:57.283572    5442 main.go:141] libmachine: Decoding PEM data...
	I0919 12:29:57.283587    5442 main.go:141] libmachine: Parsing certificate...
	I0919 12:29:57.284044    5442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:29:57.449999    5442 main.go:141] libmachine: Creating SSH key...
	I0919 12:29:57.614629    5442 main.go:141] libmachine: Creating Disk image...
	I0919 12:29:57.614639    5442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:29:57.614820    5442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0919 12:29:57.624114    5442 main.go:141] libmachine: STDOUT: 
	I0919 12:29:57.624134    5442 main.go:141] libmachine: STDERR: 
	I0919 12:29:57.624208    5442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2 +20000M
	I0919 12:29:57.632016    5442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:29:57.632032    5442 main.go:141] libmachine: STDERR: 
	I0919 12:29:57.632050    5442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0919 12:29:57.632055    5442 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:29:57.632065    5442 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:29:57.632100    5442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:d2:3e:1c:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0919 12:29:57.633841    5442 main.go:141] libmachine: STDOUT: 
	I0919 12:29:57.633855    5442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:29:57.633869    5442 client.go:171] duration metric: took 350.605209ms to LocalClient.Create
	I0919 12:29:59.636021    5442 start.go:128] duration metric: took 2.395192958s to createHost
	I0919 12:29:59.636118    5442 start.go:83] releasing machines lock for "enable-default-cni-342000", held for 2.395536875s
	W0919 12:29:59.636517    5442 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:29:59.646196    5442 out.go:201] 
	W0919 12:29:59.659254    5442 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:29:59.659327    5442 out.go:270] * 
	* 
	W0919 12:29:59.662136    5442 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:29:59.669926    5442 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.97s)
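
The failures in this group share one proximate cause. The harness launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which dials the daemon's UNIX socket at /var/run/socket_vmnet and hands the connected file descriptor to QEMU as -netdev socket,id=net0,fd=3; the "Connection refused" means nothing is listening on that socket, so every VM create aborts before boot. A minimal host-side triage sketch, assuming the lima-vm/socket_vmnet layout under /opt/socket_vmnet that these logs show (the gateway address is illustrative, not taken from this report):

	# Is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it by hand (root required; flags per the socket_vmnet README)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Once the socket accepts connections, the same start invocations should get past LocalClient.Create.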

TestNetworkPlugins/group/bridge/Start (9.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.722159084s)

-- stdout --
	* [bridge-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-342000" primary control-plane node in "bridge-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:30:01.880198    5708 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:30:01.880343    5708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:01.880346    5708 out.go:358] Setting ErrFile to fd 2...
	I0919 12:30:01.880349    5708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:01.880461    5708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:30:01.881517    5708 out.go:352] Setting JSON to false
	I0919 12:30:01.897998    5708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3566,"bootTime":1726770635,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:30:01.898077    5708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:30:01.903976    5708 out.go:177] * [bridge-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:30:01.910810    5708 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:30:01.910847    5708 notify.go:220] Checking for updates...
	I0919 12:30:01.919517    5708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:30:01.922714    5708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:30:01.925766    5708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:30:01.928749    5708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:30:01.930024    5708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:30:01.933215    5708 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:30:01.933281    5708 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:30:01.933336    5708 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:30:01.937779    5708 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:30:01.942845    5708 start.go:297] selected driver: qemu2
	I0919 12:30:01.942853    5708 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:30:01.942859    5708 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:30:01.945313    5708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:30:01.948784    5708 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:30:01.951891    5708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:30:01.951908    5708 cni.go:84] Creating CNI manager for "bridge"
	I0919 12:30:01.951916    5708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:30:01.951941    5708 start.go:340] cluster config:
	{Name:bridge-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:30:01.955860    5708 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:30:01.962872    5708 out.go:177] * Starting "bridge-342000" primary control-plane node in "bridge-342000" cluster
	I0919 12:30:01.966797    5708 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:30:01.966810    5708 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:30:01.966816    5708 cache.go:56] Caching tarball of preloaded images
	I0919 12:30:01.966868    5708 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:30:01.966874    5708 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:30:01.966934    5708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/bridge-342000/config.json ...
	I0919 12:30:01.966945    5708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/bridge-342000/config.json: {Name:mkfcbe558e4f9b84b94a5d3064e4197176a8eacc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:30:01.967236    5708 start.go:360] acquireMachinesLock for bridge-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:01.967269    5708 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "bridge-342000"
	I0919 12:30:01.967279    5708 start.go:93] Provisioning new machine with config: &{Name:bridge-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:01.967304    5708 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:01.973691    5708 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:01.990703    5708 start.go:159] libmachine.API.Create for "bridge-342000" (driver="qemu2")
	I0919 12:30:01.990731    5708 client.go:168] LocalClient.Create starting
	I0919 12:30:01.990795    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:01.990830    5708 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:01.990838    5708 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:01.990871    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:01.990896    5708 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:01.990902    5708 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:01.991300    5708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:02.150837    5708 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:02.237286    5708 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:02.237293    5708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:02.237477    5708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2
	I0919 12:30:02.246625    5708 main.go:141] libmachine: STDOUT: 
	I0919 12:30:02.246641    5708 main.go:141] libmachine: STDERR: 
	I0919 12:30:02.246710    5708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2 +20000M
	I0919 12:30:02.254707    5708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:02.254734    5708 main.go:141] libmachine: STDERR: 
	I0919 12:30:02.254753    5708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2
	I0919 12:30:02.254760    5708 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:02.254771    5708 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:02.254806    5708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:16:3e:68:58:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2
	I0919 12:30:02.256418    5708 main.go:141] libmachine: STDOUT: 
	I0919 12:30:02.256433    5708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:02.256457    5708 client.go:171] duration metric: took 265.725291ms to LocalClient.Create
	I0919 12:30:04.258459    5708 start.go:128] duration metric: took 2.291219s to createHost
	I0919 12:30:04.258484    5708 start.go:83] releasing machines lock for "bridge-342000", held for 2.291282167s
	W0919 12:30:04.258499    5708 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:04.267641    5708 out.go:177] * Deleting "bridge-342000" in qemu2 ...
	W0919 12:30:04.280124    5708 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:04.280134    5708 start.go:729] Will try again in 5 seconds ...
	I0919 12:30:09.282116    5708 start.go:360] acquireMachinesLock for bridge-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:09.282322    5708 start.go:364] duration metric: took 164.916µs to acquireMachinesLock for "bridge-342000"
	I0919 12:30:09.282369    5708 start.go:93] Provisioning new machine with config: &{Name:bridge-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:09.282476    5708 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:09.286775    5708 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:09.308892    5708 start.go:159] libmachine.API.Create for "bridge-342000" (driver="qemu2")
	I0919 12:30:09.308927    5708 client.go:168] LocalClient.Create starting
	I0919 12:30:09.308998    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:09.309036    5708 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:09.309048    5708 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:09.309093    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:09.309121    5708 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:09.309128    5708 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:09.309496    5708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:09.468877    5708 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:09.511505    5708 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:09.511512    5708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:09.511697    5708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2
	I0919 12:30:09.520791    5708 main.go:141] libmachine: STDOUT: 
	I0919 12:30:09.520807    5708 main.go:141] libmachine: STDERR: 
	I0919 12:30:09.520869    5708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2 +20000M
	I0919 12:30:09.528753    5708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:09.528768    5708 main.go:141] libmachine: STDERR: 
	I0919 12:30:09.528778    5708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2
	I0919 12:30:09.528785    5708 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:09.528793    5708 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:09.528829    5708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:0a:e7:81:ce:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/bridge-342000/disk.qcow2
	I0919 12:30:09.530493    5708 main.go:141] libmachine: STDOUT: 
	I0919 12:30:09.530508    5708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:09.530520    5708 client.go:171] duration metric: took 221.594792ms to LocalClient.Create
	I0919 12:30:11.532707    5708 start.go:128] duration metric: took 2.250268708s to createHost
	I0919 12:30:11.532797    5708 start.go:83] releasing machines lock for "bridge-342000", held for 2.250530083s
	W0919 12:30:11.533134    5708 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:11.541723    5708 out.go:201] 
	W0919 12:30:11.551870    5708 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:30:11.551921    5708 out.go:270] * 
	* 
	W0919 12:30:11.553333    5708 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:30:11.563786    5708 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.72s)
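
To iterate on a single plugin without replaying the whole matrix, the failing subtest can be rerun with the stock Go test runner from the minikube repo root; as the logs show, the suite shells out to the prebuilt out/minikube-darwin-arm64 binary, so rebuild that first if it is stale. A sketch using only standard go test flags (any suite-specific flags should be checked against test/integration before use):

	# Rerun only the bridge start subtest, verbosely, with a generous timeout
	go test ./test/integration -run "TestNetworkPlugins/group/bridge/Start" -v -timeout 30m

The slash-separated pattern is Go's normal per-level subtest matching, mirroring the name in the FAIL line above.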

TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.825520208s)

-- stdout --
	* [kubenet-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-342000" primary control-plane node in "kubenet-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:30:13.825623    5991 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:30:13.825745    5991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:13.825752    5991 out.go:358] Setting ErrFile to fd 2...
	I0919 12:30:13.825755    5991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:13.825893    5991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:30:13.826952    5991 out.go:352] Setting JSON to false
	I0919 12:30:13.844023    5991 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3578,"bootTime":1726770635,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:30:13.844092    5991 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:30:13.850874    5991 out.go:177] * [kubenet-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:30:13.861929    5991 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:30:13.862014    5991 notify.go:220] Checking for updates...
	I0919 12:30:13.868783    5991 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:30:13.871837    5991 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:30:13.874870    5991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:30:13.877848    5991 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:30:13.880791    5991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:30:13.884200    5991 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:30:13.884265    5991 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:30:13.884317    5991 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:30:13.888827    5991 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:30:13.895770    5991 start.go:297] selected driver: qemu2
	I0919 12:30:13.895780    5991 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:30:13.895787    5991 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:30:13.898116    5991 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:30:13.901834    5991 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:30:13.904941    5991 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:30:13.904967    5991 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0919 12:30:13.905017    5991 start.go:340] cluster config:
	{Name:kubenet-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:30:13.909095    5991 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:30:13.916824    5991 out.go:177] * Starting "kubenet-342000" primary control-plane node in "kubenet-342000" cluster
	I0919 12:30:13.920665    5991 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:30:13.920680    5991 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:30:13.920688    5991 cache.go:56] Caching tarball of preloaded images
	I0919 12:30:13.920746    5991 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:30:13.920752    5991 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:30:13.920812    5991 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kubenet-342000/config.json ...
	I0919 12:30:13.920824    5991 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/kubenet-342000/config.json: {Name:mk2479560eb7707f84df11a1e9a16943a0439d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:30:13.921239    5991 start.go:360] acquireMachinesLock for kubenet-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:13.921274    5991 start.go:364] duration metric: took 28µs to acquireMachinesLock for "kubenet-342000"
	I0919 12:30:13.921285    5991 start.go:93] Provisioning new machine with config: &{Name:kubenet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:13.921314    5991 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:13.929688    5991 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:13.946743    5991 start.go:159] libmachine.API.Create for "kubenet-342000" (driver="qemu2")
	I0919 12:30:13.946777    5991 client.go:168] LocalClient.Create starting
	I0919 12:30:13.946839    5991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:13.946870    5991 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:13.946883    5991 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:13.946922    5991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:13.946946    5991 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:13.946954    5991 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:13.947372    5991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:14.102770    5991 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:14.166447    5991 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:14.166456    5991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:14.166669    5991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2
	I0919 12:30:14.176055    5991 main.go:141] libmachine: STDOUT: 
	I0919 12:30:14.176080    5991 main.go:141] libmachine: STDERR: 
	I0919 12:30:14.176135    5991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2 +20000M
	I0919 12:30:14.184518    5991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:14.184538    5991 main.go:141] libmachine: STDERR: 
	I0919 12:30:14.184557    5991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2
	I0919 12:30:14.184564    5991 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:14.184577    5991 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:14.184608    5991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2e:4c:a1:00:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2
	I0919 12:30:14.186323    5991 main.go:141] libmachine: STDOUT: 
	I0919 12:30:14.186338    5991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:14.186367    5991 client.go:171] duration metric: took 239.589125ms to LocalClient.Create
	I0919 12:30:16.188392    5991 start.go:128] duration metric: took 2.267137083s to createHost
	I0919 12:30:16.188428    5991 start.go:83] releasing machines lock for "kubenet-342000", held for 2.267219291s
	W0919 12:30:16.188455    5991 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:16.194195    5991 out.go:177] * Deleting "kubenet-342000" in qemu2 ...
	W0919 12:30:16.218642    5991 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:16.218652    5991 start.go:729] Will try again in 5 seconds ...
	I0919 12:30:21.220754    5991 start.go:360] acquireMachinesLock for kubenet-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:21.221337    5991 start.go:364] duration metric: took 453.083µs to acquireMachinesLock for "kubenet-342000"
	I0919 12:30:21.221434    5991 start.go:93] Provisioning new machine with config: &{Name:kubenet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:21.221737    5991 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:21.233488    5991 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:21.283235    5991 start.go:159] libmachine.API.Create for "kubenet-342000" (driver="qemu2")
	I0919 12:30:21.283303    5991 client.go:168] LocalClient.Create starting
	I0919 12:30:21.283430    5991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:21.283505    5991 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:21.283522    5991 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:21.283583    5991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:21.283630    5991 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:21.283641    5991 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:21.284254    5991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:21.452099    5991 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:21.556754    5991 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:21.556762    5991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:21.556959    5991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2
	I0919 12:30:21.566871    5991 main.go:141] libmachine: STDOUT: 
	I0919 12:30:21.566895    5991 main.go:141] libmachine: STDERR: 
	I0919 12:30:21.566958    5991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2 +20000M
	I0919 12:30:21.575062    5991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:21.575077    5991 main.go:141] libmachine: STDERR: 
	I0919 12:30:21.575088    5991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2
	I0919 12:30:21.575093    5991 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:21.575101    5991 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:21.575125    5991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:fd:77:b8:60:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/kubenet-342000/disk.qcow2
	I0919 12:30:21.576792    5991 main.go:141] libmachine: STDOUT: 
	I0919 12:30:21.576806    5991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:21.576821    5991 client.go:171] duration metric: took 293.521083ms to LocalClient.Create
	I0919 12:30:23.578855    5991 start.go:128] duration metric: took 2.357155125s to createHost
	I0919 12:30:23.578898    5991 start.go:83] releasing machines lock for "kubenet-342000", held for 2.357588792s
	W0919 12:30:23.579032    5991 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:23.595314    5991 out.go:201] 
	W0919 12:30:23.599381    5991 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:30:23.599404    5991 out.go:270] * 
	* 
	W0919 12:30:23.600085    5991 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:30:23.613336    5991 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
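
One configuration mismatch worth ruling out on this host: these logs dial SocketVMnetPath:/var/run/socket_vmnet, but a Homebrew-managed socket_vmnet service serves its socket under the Homebrew prefix instead, which would produce exactly this "Connection refused". A hedged check, assuming an Apple-silicon Homebrew install (both paths below are brew defaults, not taken from this report):

	# The brew service variant keeps its socket under the prefix, not /var/run
	sudo brew services start socket_vmnet
	ls -l /opt/homebrew/var/run/socket_vmnet

If the brew path is the live one, the SocketVMnetPath/SocketVMnetClientPath fields visible in the cluster configs above must be pointed there instead.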

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.819069875s)

-- stdout --
	* [custom-flannel-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-342000" primary control-plane node in "custom-flannel-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:30:25.802692    6103 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:30:25.802835    6103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:25.802838    6103 out.go:358] Setting ErrFile to fd 2...
	I0919 12:30:25.802841    6103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:25.802979    6103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:30:25.804093    6103 out.go:352] Setting JSON to false
	I0919 12:30:25.820445    6103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3590,"bootTime":1726770635,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:30:25.820525    6103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:30:25.828506    6103 out.go:177] * [custom-flannel-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:30:25.838729    6103 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:30:25.838764    6103 notify.go:220] Checking for updates...
	I0919 12:30:25.845647    6103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:30:25.848697    6103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:30:25.850224    6103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:30:25.853605    6103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:30:25.856676    6103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:30:25.860109    6103 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:30:25.860176    6103 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:30:25.860229    6103 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:30:25.864658    6103 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:30:25.871665    6103 start.go:297] selected driver: qemu2
	I0919 12:30:25.871675    6103 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:30:25.871682    6103 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:30:25.873997    6103 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:30:25.876734    6103 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:30:25.879788    6103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:30:25.879807    6103 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0919 12:30:25.879822    6103 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0919 12:30:25.879866    6103 start.go:340] cluster config:
	{Name:custom-flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:30:25.883338    6103 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:30:25.890675    6103 out.go:177] * Starting "custom-flannel-342000" primary control-plane node in "custom-flannel-342000" cluster
	I0919 12:30:25.894726    6103 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:30:25.894743    6103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:30:25.894755    6103 cache.go:56] Caching tarball of preloaded images
	I0919 12:30:25.894832    6103 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:30:25.894845    6103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:30:25.894913    6103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/custom-flannel-342000/config.json ...
	I0919 12:30:25.894933    6103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/custom-flannel-342000/config.json: {Name:mkbcb2f9b3cd45ef969f3b285fef15f3caa0e6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:30:25.895361    6103 start.go:360] acquireMachinesLock for custom-flannel-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:25.895402    6103 start.go:364] duration metric: took 33.083µs to acquireMachinesLock for "custom-flannel-342000"
	I0919 12:30:25.895417    6103 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:25.895448    6103 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:25.903704    6103 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:25.920849    6103 start.go:159] libmachine.API.Create for "custom-flannel-342000" (driver="qemu2")
	I0919 12:30:25.920882    6103 client.go:168] LocalClient.Create starting
	I0919 12:30:25.920950    6103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:25.920981    6103 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:25.920990    6103 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:25.921027    6103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:25.921049    6103 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:25.921059    6103 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:25.921414    6103 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:26.081041    6103 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:26.141123    6103 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:26.141129    6103 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:26.141319    6103 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0919 12:30:26.150793    6103 main.go:141] libmachine: STDOUT: 
	I0919 12:30:26.150811    6103 main.go:141] libmachine: STDERR: 
	I0919 12:30:26.150878    6103 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2 +20000M
	I0919 12:30:26.159324    6103 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:26.159344    6103 main.go:141] libmachine: STDERR: 
	I0919 12:30:26.159365    6103 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0919 12:30:26.159370    6103 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:26.159384    6103 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:26.159420    6103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:43:ce:04:06:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0919 12:30:26.161292    6103 main.go:141] libmachine: STDOUT: 
	I0919 12:30:26.161304    6103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:26.161325    6103 client.go:171] duration metric: took 240.443792ms to LocalClient.Create
	I0919 12:30:28.163426    6103 start.go:128] duration metric: took 2.268018292s to createHost
	I0919 12:30:28.163498    6103 start.go:83] releasing machines lock for "custom-flannel-342000", held for 2.268158666s
	W0919 12:30:28.163544    6103 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:28.177166    6103 out.go:177] * Deleting "custom-flannel-342000" in qemu2 ...
	W0919 12:30:28.204761    6103 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:28.204785    6103 start.go:729] Will try again in 5 seconds ...
	I0919 12:30:33.206958    6103 start.go:360] acquireMachinesLock for custom-flannel-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:33.207539    6103 start.go:364] duration metric: took 473.917µs to acquireMachinesLock for "custom-flannel-342000"
	I0919 12:30:33.207698    6103 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:33.208059    6103 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:33.219785    6103 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:33.273429    6103 start.go:159] libmachine.API.Create for "custom-flannel-342000" (driver="qemu2")
	I0919 12:30:33.273482    6103 client.go:168] LocalClient.Create starting
	I0919 12:30:33.273613    6103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:33.273692    6103 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:33.273706    6103 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:33.273762    6103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:33.273806    6103 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:33.273818    6103 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:33.274390    6103 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:33.445545    6103 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:33.542642    6103 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:33.542656    6103 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:33.542908    6103 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0919 12:30:33.552861    6103 main.go:141] libmachine: STDOUT: 
	I0919 12:30:33.552884    6103 main.go:141] libmachine: STDERR: 
	I0919 12:30:33.552940    6103 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2 +20000M
	I0919 12:30:33.561104    6103 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:33.561121    6103 main.go:141] libmachine: STDERR: 
	I0919 12:30:33.561140    6103 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0919 12:30:33.561146    6103 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:33.561159    6103 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:33.561186    6103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:1d:5c:be:25:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0919 12:30:33.563023    6103 main.go:141] libmachine: STDOUT: 
	I0919 12:30:33.563037    6103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:33.563058    6103 client.go:171] duration metric: took 289.58075ms to LocalClient.Create
	I0919 12:30:35.565061    6103 start.go:128] duration metric: took 2.357059625s to createHost
	I0919 12:30:35.565081    6103 start.go:83] releasing machines lock for "custom-flannel-342000", held for 2.357594542s
	W0919 12:30:35.565175    6103 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:35.568540    6103 out.go:201] 
	W0919 12:30:35.571414    6103 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:30:35.571423    6103 out.go:270] * 
	* 
	W0919 12:30:35.571855    6103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:30:35.584355    6103 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)

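The stderr above also documents the driver's recovery path: the first StartHost attempt fails, the half-created profile is deleted, start.go:729 waits five seconds, and createHost runs exactly once more before the run exits with GUEST_PROVISION. A rough sketch of that retry-once flow, with createHost as a hypothetical stand-in for the real libmachine provisioning call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for provisioning; in this report every attempt fails with
// the same connection-refused error from socket_vmnet.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return // first attempt succeeded
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..."
	if err := createHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

Because the second attempt hits the same refused socket, each test in this group burns roughly ten seconds (two ~2.3s createHost cycles plus the 5s pause) before failing with exit status 80.
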
TestNetworkPlugins/group/calico/Start (9.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.901212333s)

-- stdout --
	* [calico-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-342000" primary control-plane node in "calico-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0919 12:30:37.987229    6225 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:30:37.987369    6225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:37.987373    6225 out.go:358] Setting ErrFile to fd 2...
	I0919 12:30:37.987376    6225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:37.987525    6225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:30:37.988675    6225 out.go:352] Setting JSON to false
	I0919 12:30:38.005062    6225 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3602,"bootTime":1726770635,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:30:38.005137    6225 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:30:38.013124    6225 out.go:177] * [calico-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:30:38.021075    6225 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:30:38.021140    6225 notify.go:220] Checking for updates...
	I0919 12:30:38.028041    6225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:30:38.031082    6225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:30:38.034088    6225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:30:38.035654    6225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:30:38.039053    6225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:30:38.042454    6225 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:30:38.042520    6225 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:30:38.042566    6225 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:30:38.046955    6225 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:30:38.054074    6225 start.go:297] selected driver: qemu2
	I0919 12:30:38.054081    6225 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:30:38.054086    6225 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:30:38.056420    6225 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:30:38.059121    6225 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:30:38.062221    6225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:30:38.062240    6225 cni.go:84] Creating CNI manager for "calico"
	I0919 12:30:38.062247    6225 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0919 12:30:38.062283    6225 start.go:340] cluster config:
	{Name:calico-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:30:38.065962    6225 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:30:38.073040    6225 out.go:177] * Starting "calico-342000" primary control-plane node in "calico-342000" cluster
	I0919 12:30:38.077056    6225 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:30:38.077071    6225 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:30:38.077083    6225 cache.go:56] Caching tarball of preloaded images
	I0919 12:30:38.077154    6225 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:30:38.077161    6225 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:30:38.077248    6225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/calico-342000/config.json ...
	I0919 12:30:38.077261    6225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/calico-342000/config.json: {Name:mkc0a23b280cca5963a5b17a40f180643c45ee73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:30:38.077504    6225 start.go:360] acquireMachinesLock for calico-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:38.077538    6225 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "calico-342000"
	I0919 12:30:38.077549    6225 start.go:93] Provisioning new machine with config: &{Name:calico-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:38.077577    6225 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:38.085078    6225 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:38.102766    6225 start.go:159] libmachine.API.Create for "calico-342000" (driver="qemu2")
	I0919 12:30:38.102801    6225 client.go:168] LocalClient.Create starting
	I0919 12:30:38.102870    6225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:38.102904    6225 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:38.102914    6225 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:38.102955    6225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:38.102988    6225 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:38.102999    6225 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:38.103448    6225 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:38.265333    6225 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:38.430164    6225 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:38.430174    6225 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:38.430392    6225 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2
	I0919 12:30:38.440092    6225 main.go:141] libmachine: STDOUT: 
	I0919 12:30:38.440113    6225 main.go:141] libmachine: STDERR: 
	I0919 12:30:38.440168    6225 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2 +20000M
	I0919 12:30:38.448495    6225 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:38.448518    6225 main.go:141] libmachine: STDERR: 
	I0919 12:30:38.448531    6225 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2
	I0919 12:30:38.448541    6225 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:38.448555    6225 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:38.448580    6225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:84:39:4a:20:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2
	I0919 12:30:38.450321    6225 main.go:141] libmachine: STDOUT: 
	I0919 12:30:38.450335    6225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:38.450358    6225 client.go:171] duration metric: took 347.560583ms to LocalClient.Create
	I0919 12:30:40.452498    6225 start.go:128] duration metric: took 2.374962375s to createHost
	I0919 12:30:40.452603    6225 start.go:83] releasing machines lock for "calico-342000", held for 2.375127125s
	W0919 12:30:40.452660    6225 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:40.464053    6225 out.go:177] * Deleting "calico-342000" in qemu2 ...
	W0919 12:30:40.500001    6225 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:40.500044    6225 start.go:729] Will try again in 5 seconds ...
	I0919 12:30:45.502107    6225 start.go:360] acquireMachinesLock for calico-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:45.502570    6225 start.go:364] duration metric: took 379.5µs to acquireMachinesLock for "calico-342000"
	I0919 12:30:45.502635    6225 start.go:93] Provisioning new machine with config: &{Name:calico-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:45.502913    6225 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:45.514561    6225 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:45.553571    6225 start.go:159] libmachine.API.Create for "calico-342000" (driver="qemu2")
	I0919 12:30:45.553624    6225 client.go:168] LocalClient.Create starting
	I0919 12:30:45.553731    6225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:45.553792    6225 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:45.553809    6225 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:45.553855    6225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:45.553898    6225 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:45.553909    6225 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:45.554405    6225 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:45.719552    6225 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:45.793708    6225 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:45.793716    6225 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:45.793905    6225 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2
	I0919 12:30:45.803131    6225 main.go:141] libmachine: STDOUT: 
	I0919 12:30:45.803149    6225 main.go:141] libmachine: STDERR: 
	I0919 12:30:45.803206    6225 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2 +20000M
	I0919 12:30:45.811139    6225 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:45.811155    6225 main.go:141] libmachine: STDERR: 
	I0919 12:30:45.811167    6225 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2
	I0919 12:30:45.811172    6225 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:45.811180    6225 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:45.811211    6225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:41:bc:13:8e:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/calico-342000/disk.qcow2
	I0919 12:30:45.812929    6225 main.go:141] libmachine: STDOUT: 
	I0919 12:30:45.812944    6225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:45.812957    6225 client.go:171] duration metric: took 259.336041ms to LocalClient.Create
	I0919 12:30:47.815103    6225 start.go:128] duration metric: took 2.312220708s to createHost
	I0919 12:30:47.815193    6225 start.go:83] releasing machines lock for "calico-342000", held for 2.31267375s
	W0919 12:30:47.815602    6225 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:47.831202    6225 out.go:201] 
	W0919 12:30:47.834294    6225 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:30:47.834346    6225 out.go:270] * 
	* 
	W0919 12:30:47.837300    6225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:30:47.846168    6225 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.90s)

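Note that disk preparation succeeds on every attempt: both qemu-img invocations (the raw-to-qcow2 convert and the +20000M resize) return cleanly, and only the subsequent socket_vmnet_client launch fails. A sketch of that same two-step sequence via os/exec, with placeholder paths standing in for the per-profile machine directory used in the log:

package main

import (
	"fmt"
	"os/exec"
)

// run echoes the command and its combined output, loosely matching the
// "executing: ..." / STDOUT / STDERR lines that libmachine logs above.
func run(name string, args ...string) error {
	fmt.Println("executing:", name, args)
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("OUTPUT: %s\n", out)
	return err
}

func main() {
	// Placeholder paths; the real ones live under .minikube/machines/<profile>/.
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}
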
TestNetworkPlugins/group/false/Start (9.93s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.929917458s)

-- stdout --
	* [false-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-342000" primary control-plane node in "false-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0919 12:30:50.346226    6350 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:30:50.346353    6350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:50.346357    6350 out.go:358] Setting ErrFile to fd 2...
	I0919 12:30:50.346359    6350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:30:50.346509    6350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:30:50.347664    6350 out.go:352] Setting JSON to false
	I0919 12:30:50.365141    6350 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3615,"bootTime":1726770635,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:30:50.365214    6350 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:30:50.372281    6350 out.go:177] * [false-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:30:50.381198    6350 notify.go:220] Checking for updates...
	I0919 12:30:50.386072    6350 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:30:50.394103    6350 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:30:50.402082    6350 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:30:50.409965    6350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:30:50.413058    6350 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:30:50.416120    6350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:30:50.419367    6350 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:30:50.419445    6350 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:30:50.419489    6350 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:30:50.424038    6350 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:30:50.431036    6350 start.go:297] selected driver: qemu2
	I0919 12:30:50.431042    6350 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:30:50.431048    6350 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:30:50.433433    6350 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:30:50.436034    6350 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:30:50.439092    6350 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:30:50.439115    6350 cni.go:84] Creating CNI manager for "false"
	I0919 12:30:50.439143    6350 start.go:340] cluster config:
	{Name:false-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:30:50.442601    6350 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:30:50.450118    6350 out.go:177] * Starting "false-342000" primary control-plane node in "false-342000" cluster
	I0919 12:30:50.454047    6350 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:30:50.454059    6350 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:30:50.454063    6350 cache.go:56] Caching tarball of preloaded images
	I0919 12:30:50.454116    6350 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:30:50.454121    6350 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:30:50.454168    6350 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/false-342000/config.json ...
	I0919 12:30:50.454177    6350 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/false-342000/config.json: {Name:mk57998265a9683a190086b93c5cdf6dca30a1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:30:50.454375    6350 start.go:360] acquireMachinesLock for false-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:50.454404    6350 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "false-342000"
	I0919 12:30:50.454414    6350 start.go:93] Provisioning new machine with config: &{Name:false-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:50.454438    6350 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:50.463062    6350 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:50.478917    6350 start.go:159] libmachine.API.Create for "false-342000" (driver="qemu2")
	I0919 12:30:50.478946    6350 client.go:168] LocalClient.Create starting
	I0919 12:30:50.479001    6350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:50.479029    6350 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:50.479041    6350 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:50.479090    6350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:50.479112    6350 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:50.479122    6350 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:50.479475    6350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:50.643329    6350 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:50.729769    6350 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:50.729776    6350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:50.729973    6350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2
	I0919 12:30:50.739336    6350 main.go:141] libmachine: STDOUT: 
	I0919 12:30:50.739367    6350 main.go:141] libmachine: STDERR: 
	I0919 12:30:50.739435    6350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2 +20000M
	I0919 12:30:50.747378    6350 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:50.747392    6350 main.go:141] libmachine: STDERR: 
	I0919 12:30:50.747406    6350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2
	I0919 12:30:50.747411    6350 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:50.747423    6350 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:50.747448    6350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:bd:ce:0e:f4:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2
	I0919 12:30:50.749149    6350 main.go:141] libmachine: STDOUT: 
	I0919 12:30:50.749163    6350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:50.749187    6350 client.go:171] duration metric: took 270.242333ms to LocalClient.Create
	I0919 12:30:52.751314    6350 start.go:128] duration metric: took 2.296926125s to createHost
	I0919 12:30:52.751374    6350 start.go:83] releasing machines lock for "false-342000", held for 2.297033542s
	W0919 12:30:52.751407    6350 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:52.761771    6350 out.go:177] * Deleting "false-342000" in qemu2 ...
	W0919 12:30:52.794746    6350 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:30:52.794776    6350 start.go:729] Will try again in 5 seconds ...
	I0919 12:30:57.796834    6350 start.go:360] acquireMachinesLock for false-342000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:30:57.796956    6350 start.go:364] duration metric: took 95.375µs to acquireMachinesLock for "false-342000"
	I0919 12:30:57.797018    6350 start.go:93] Provisioning new machine with config: &{Name:false-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:30:57.797064    6350 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:30:57.807523    6350 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 12:30:57.823664    6350 start.go:159] libmachine.API.Create for "false-342000" (driver="qemu2")
	I0919 12:30:57.823699    6350 client.go:168] LocalClient.Create starting
	I0919 12:30:57.823801    6350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:30:57.823844    6350 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:57.823853    6350 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:57.823894    6350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:30:57.823920    6350 main.go:141] libmachine: Decoding PEM data...
	I0919 12:30:57.823928    6350 main.go:141] libmachine: Parsing certificate...
	I0919 12:30:57.824238    6350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:30:57.991808    6350 main.go:141] libmachine: Creating SSH key...
	I0919 12:30:58.181539    6350 main.go:141] libmachine: Creating Disk image...
	I0919 12:30:58.181549    6350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:30:58.181758    6350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2
	I0919 12:30:58.191379    6350 main.go:141] libmachine: STDOUT: 
	I0919 12:30:58.191400    6350 main.go:141] libmachine: STDERR: 
	I0919 12:30:58.191472    6350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2 +20000M
	I0919 12:30:58.199672    6350 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:30:58.199688    6350 main.go:141] libmachine: STDERR: 
	I0919 12:30:58.199702    6350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2
	I0919 12:30:58.199716    6350 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:30:58.199725    6350 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:30:58.199758    6350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b0:a8:14:1b:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/false-342000/disk.qcow2
	I0919 12:30:58.201444    6350 main.go:141] libmachine: STDOUT: 
	I0919 12:30:58.201458    6350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:30:58.201470    6350 client.go:171] duration metric: took 377.77975ms to LocalClient.Create
	I0919 12:31:00.203651    6350 start.go:128] duration metric: took 2.406637208s to createHost
	I0919 12:31:00.203756    6350 start.go:83] releasing machines lock for "false-342000", held for 2.406842375s
	W0919 12:31:00.204194    6350 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:00.210562    6350 out.go:201] 
	W0919 12:31:00.222702    6350 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:00.222765    6350 out.go:270] * 
	* 
	W0919 12:31:00.224736    6350 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:00.237508    6350 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.93s)
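Every start in this group dies at the same step: minikube execs socket_vmnet_client (the SocketVMnetClientPath/SocketVMnetPath values in the config dump above), the client cannot reach the daemon's unix socket, and QEMU therefore never receives its network file descriptor. A minimal triage sketch, assuming socket_vmnet is installed at the paths the log reports; the `true` stand-in workload and the exact checks are illustrative, not part of the harness:

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# End-to-end smoke test: a healthy setup execs the trailing command;
	# a broken one prints the same "Connection refused" seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true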

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-029000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-029000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.928074791s)

                                                
                                                
-- stdout --
	* [old-k8s-version-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-029000" primary control-plane node in "old-k8s-version-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:31:02.470353    6513 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:02.470506    6513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:02.470510    6513 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:02.470512    6513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:02.470627    6513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:02.471755    6513 out.go:352] Setting JSON to false
	I0919 12:31:02.490307    6513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3627,"bootTime":1726770635,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:02.490415    6513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:02.496716    6513 out.go:177] * [old-k8s-version-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:02.504848    6513 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:02.504906    6513 notify.go:220] Checking for updates...
	I0919 12:31:02.515352    6513 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:02.518224    6513 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:02.521528    6513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:02.524432    6513 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:02.525930    6513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:02.529683    6513 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:02.529750    6513 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:31:02.529795    6513 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:02.535062    6513 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:31:02.541658    6513 start.go:297] selected driver: qemu2
	I0919 12:31:02.541664    6513 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:31:02.541669    6513 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:02.543903    6513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:31:02.547839    6513 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:31:02.552958    6513 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:02.552976    6513 cni.go:84] Creating CNI manager for ""
	I0919 12:31:02.553000    6513 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 12:31:02.553033    6513 start.go:340] cluster config:
	{Name:old-k8s-version-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:02.556596    6513 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:02.565109    6513 out.go:177] * Starting "old-k8s-version-029000" primary control-plane node in "old-k8s-version-029000" cluster
	I0919 12:31:02.570332    6513 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 12:31:02.570346    6513 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 12:31:02.570353    6513 cache.go:56] Caching tarball of preloaded images
	I0919 12:31:02.570405    6513 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:31:02.570411    6513 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0919 12:31:02.570473    6513 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/old-k8s-version-029000/config.json ...
	I0919 12:31:02.570484    6513 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/old-k8s-version-029000/config.json: {Name:mk52615527fac3b5bdbf67b7ff04a803bffba2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:31:02.570733    6513 start.go:360] acquireMachinesLock for old-k8s-version-029000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:02.570771    6513 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "old-k8s-version-029000"
	I0919 12:31:02.570781    6513 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:02.570806    6513 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:02.579752    6513 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:02.596103    6513 start.go:159] libmachine.API.Create for "old-k8s-version-029000" (driver="qemu2")
	I0919 12:31:02.596136    6513 client.go:168] LocalClient.Create starting
	I0919 12:31:02.596215    6513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:02.596250    6513 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:02.596283    6513 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:02.596322    6513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:02.596347    6513 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:02.596353    6513 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:02.596723    6513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:02.773378    6513 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:02.861467    6513 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:02.861481    6513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:02.861691    6513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:02.870990    6513 main.go:141] libmachine: STDOUT: 
	I0919 12:31:02.871009    6513 main.go:141] libmachine: STDERR: 
	I0919 12:31:02.871064    6513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2 +20000M
	I0919 12:31:02.879194    6513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:02.879211    6513 main.go:141] libmachine: STDERR: 
	I0919 12:31:02.879238    6513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:02.879244    6513 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:02.879255    6513 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:02.879281    6513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:4b:a3:f2:72:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:02.880974    6513 main.go:141] libmachine: STDOUT: 
	I0919 12:31:02.880989    6513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:02.881011    6513 client.go:171] duration metric: took 284.875667ms to LocalClient.Create
	I0919 12:31:04.883116    6513 start.go:128] duration metric: took 2.312360667s to createHost
	I0919 12:31:04.883170    6513 start.go:83] releasing machines lock for "old-k8s-version-029000", held for 2.312463792s
	W0919 12:31:04.883212    6513 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:04.899691    6513 out.go:177] * Deleting "old-k8s-version-029000" in qemu2 ...
	W0919 12:31:04.948546    6513 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:04.948569    6513 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:09.950626    6513 start.go:360] acquireMachinesLock for old-k8s-version-029000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:09.951065    6513 start.go:364] duration metric: took 344.916µs to acquireMachinesLock for "old-k8s-version-029000"
	I0919 12:31:09.951160    6513 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:09.951332    6513 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:09.957221    6513 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:10.003462    6513 start.go:159] libmachine.API.Create for "old-k8s-version-029000" (driver="qemu2")
	I0919 12:31:10.003527    6513 client.go:168] LocalClient.Create starting
	I0919 12:31:10.003632    6513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:10.003705    6513 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:10.003723    6513 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:10.003786    6513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:10.003831    6513 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:10.003842    6513 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:10.004308    6513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:10.231966    6513 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:10.297178    6513 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:10.297187    6513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:10.297382    6513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:10.306989    6513 main.go:141] libmachine: STDOUT: 
	I0919 12:31:10.307005    6513 main.go:141] libmachine: STDERR: 
	I0919 12:31:10.307063    6513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2 +20000M
	I0919 12:31:10.315260    6513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:10.315284    6513 main.go:141] libmachine: STDERR: 
	I0919 12:31:10.315296    6513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:10.315302    6513 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:10.315313    6513 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:10.315338    6513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:54:8c:19:9c:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:10.317043    6513 main.go:141] libmachine: STDOUT: 
	I0919 12:31:10.317058    6513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:10.317072    6513 client.go:171] duration metric: took 313.54975ms to LocalClient.Create
	I0919 12:31:12.319180    6513 start.go:128] duration metric: took 2.367892708s to createHost
	I0919 12:31:12.319236    6513 start.go:83] releasing machines lock for "old-k8s-version-029000", held for 2.368225333s
	W0919 12:31:12.319554    6513 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:12.333283    6513 out.go:201] 
	W0919 12:31:12.338137    6513 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:12.338160    6513 out.go:270] * 
	* 
	W0919 12:31:12.339378    6513 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:12.355995    6513 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-029000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (56.203334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.99s)
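The two exit codes in this block are minikube's own, not the shell's: start exits 80 alongside the "X Exiting due to GUEST_PROVISION" stderr line, while the post-mortem status probe exits 7, which the harness itself notes "may be ok". A sketch of rerunning that probe by hand; reading exit 7 as "host, kubelet, and apiserver all reported down" is an assumption to verify against minikube's status command, not something the report states:

	# Re-run the post-mortem probe from helpers_test.go:239:
	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
	echo "status exit: $?"   # expect 7 while the profile reports "Stopped"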

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-029000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-029000 create -f testdata/busybox.yaml: exit status 1 (30.297167ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-029000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-029000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (29.787625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (30.262708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
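DeployApp and the addon subtests that follow fail for a derivative reason: FirstStart exited before provisioning, so no kubeconfig context named old-k8s-version-029000 was ever written, and every kubectl --context invocation bails out immediately. A quick check to confirm the cascade (the grep target is just the profile name from the log above):

	# The profile's context should be absent after the exit-80 FirstStart.
	kubectl config get-contexts -o name | grep old-k8s-version-029000 \
	  || echo "context missing, as expected after the failed start"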

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-029000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-029000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-029000 describe deploy/metrics-server -n kube-system: exit status 1 (27.887541ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-029000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-029000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (29.640917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-029000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-029000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.195891541s)

                                                
                                                
-- stdout --
	* [old-k8s-version-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-029000" primary control-plane node in "old-k8s-version-029000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:31:16.130735    6577 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:16.130946    6577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:16.130951    6577 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:16.130953    6577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:16.131098    6577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:16.132509    6577 out.go:352] Setting JSON to false
	I0919 12:31:16.150921    6577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3641,"bootTime":1726770635,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:16.150989    6577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:16.155068    6577 out.go:177] * [old-k8s-version-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:16.161978    6577 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:16.162045    6577 notify.go:220] Checking for updates...
	I0919 12:31:16.168896    6577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:16.171976    6577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:16.174943    6577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:16.177900    6577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:16.186052    6577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:16.189237    6577 config.go:182] Loaded profile config "old-k8s-version-029000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0919 12:31:16.192869    6577 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0919 12:31:16.195951    6577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:16.199975    6577 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:31:16.206947    6577 start.go:297] selected driver: qemu2
	I0919 12:31:16.206953    6577 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:16.207001    6577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:16.209416    6577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:16.209523    6577 cni.go:84] Creating CNI manager for ""
	I0919 12:31:16.209545    6577 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 12:31:16.209570    6577 start.go:340] cluster config:
	{Name:old-k8s-version-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:16.213078    6577 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:16.220953    6577 out.go:177] * Starting "old-k8s-version-029000" primary control-plane node in "old-k8s-version-029000" cluster
	I0919 12:31:16.224985    6577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 12:31:16.224998    6577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 12:31:16.225006    6577 cache.go:56] Caching tarball of preloaded images
	I0919 12:31:16.225056    6577 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:31:16.225062    6577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0919 12:31:16.225120    6577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/old-k8s-version-029000/config.json ...
	I0919 12:31:16.225532    6577 start.go:360] acquireMachinesLock for old-k8s-version-029000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:16.225564    6577 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "old-k8s-version-029000"
	I0919 12:31:16.225572    6577 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:16.225581    6577 fix.go:54] fixHost starting: 
	I0919 12:31:16.225701    6577 fix.go:112] recreateIfNeeded on old-k8s-version-029000: state=Stopped err=<nil>
	W0919 12:31:16.225709    6577 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:16.229944    6577 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-029000" ...
	I0919 12:31:16.237890    6577 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:16.237918    6577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:54:8c:19:9c:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:16.239784    6577 main.go:141] libmachine: STDOUT: 
	I0919 12:31:16.239803    6577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:16.239848    6577 fix.go:56] duration metric: took 14.269375ms for fixHost
	I0919 12:31:16.239853    6577 start.go:83] releasing machines lock for "old-k8s-version-029000", held for 14.286458ms
	W0919 12:31:16.239858    6577 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:16.239893    6577 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:16.239898    6577 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:21.242057    6577 start.go:360] acquireMachinesLock for old-k8s-version-029000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:21.242406    6577 start.go:364] duration metric: took 249.667µs to acquireMachinesLock for "old-k8s-version-029000"
	I0919 12:31:21.242494    6577 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:21.242506    6577 fix.go:54] fixHost starting: 
	I0919 12:31:21.242949    6577 fix.go:112] recreateIfNeeded on old-k8s-version-029000: state=Stopped err=<nil>
	W0919 12:31:21.242969    6577 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:21.250903    6577 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-029000" ...
	I0919 12:31:21.254852    6577 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:21.255025    6577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:54:8c:19:9c:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/old-k8s-version-029000/disk.qcow2
	I0919 12:31:21.260597    6577 main.go:141] libmachine: STDOUT: 
	I0919 12:31:21.260639    6577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:21.260693    6577 fix.go:56] duration metric: took 18.18725ms for fixHost
	I0919 12:31:21.260707    6577 start.go:83] releasing machines lock for "old-k8s-version-029000", held for 18.285291ms
	W0919 12:31:21.260816    6577 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-029000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:21.267811    6577 out.go:201] 
	W0919 12:31:21.272816    6577 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:21.272838    6577 out.go:270] * 
	W0919 12:31:21.274480    6577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:21.285843    6577 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-029000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
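Every start failure in this group has the same root cause, visible in the stderr above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. The client is what hands qemu its network connection (the connected socket becomes fd 3, matching "-netdev socket,id=net0,fd=3"), so when the connect fails, qemu never launches and minikube exits with GUEST_PROVISION (exit status 80). A minimal triage sketch for the CI host follows; it assumes socket_vmnet is installed under /opt/socket_vmnet as in this run, and the gateway address is only illustrative:

	# Is the daemon's unix socket present?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet daemon process running at all?
	pgrep -fl socket_vmnet

	# If not, start one (vmnet requires root; gateway value is illustrative):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With no listener on the socket, socket_vmnet_client fails instantly with "Connection refused", which is why the retry five seconds later fails just as fast.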
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (64.3575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-029000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (32.2725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-029000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-029000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-029000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.158791ms)

** stderr ** 
	error: context "old-k8s-version-029000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-029000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
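The image assertion above compares against the CustomAddonImages override logged in the cluster config during SecondStart (MetricsScraper:registry.k8s.io/echoserver:1.4). A hand-run equivalent of the check, hypothetical here since the kube context was never created:

	kubectl --context old-k8s-version-029000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'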
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (30.1295ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-029000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
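The diff above is go-cmp want/got output: every "-" entry is an image expected for v1.20.0 (all still under the legacy k8s.gcr.io registry at that version) but missing from the got side, since "image list" against the never-started VM produced no images at all.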
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (30.051791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-029000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-029000 --alsologtostderr -v=1: exit status 83 (49.400625ms)

-- stdout --
	* The control-plane node old-k8s-version-029000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-029000"

-- /stdout --
** stderr ** 
	I0919 12:31:21.556017    6601 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:21.556353    6601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:21.556357    6601 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:21.556359    6601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:21.556500    6601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:21.556703    6601 out.go:352] Setting JSON to false
	I0919 12:31:21.556711    6601 mustload.go:65] Loading cluster: old-k8s-version-029000
	I0919 12:31:21.556927    6601 config.go:182] Loaded profile config "old-k8s-version-029000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0919 12:31:21.561773    6601 out.go:177] * The control-plane node old-k8s-version-029000 host is not running: state=Stopped
	I0919 12:31:21.570715    6601 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-029000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-029000 --alsologtostderr -v=1 failed: exit status 83
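Unlike the exit-80 failures above, pause exits 83 here without attempting any driver operation: it loads the profile, observes state=Stopped, and prints the advisory shown in the stdout block, the same short-circuit pattern as the earlier context-dependent subtests.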
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (31.087417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (29.470792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (10.15s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-816000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-816000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.104095125s)

-- stdout --
	* [no-preload-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-816000" primary control-plane node in "no-preload-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:31:21.905803    6619 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:21.905961    6619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:21.905964    6619 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:21.905967    6619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:21.906109    6619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:21.907301    6619 out.go:352] Setting JSON to false
	I0919 12:31:21.924090    6619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3646,"bootTime":1726770635,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:21.924192    6619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:21.930657    6619 out.go:177] * [no-preload-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:21.938919    6619 notify.go:220] Checking for updates...
	I0919 12:31:21.941743    6619 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:21.947686    6619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:21.954848    6619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:21.963855    6619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:21.972774    6619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:21.978802    6619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:21.983135    6619 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:21.983203    6619 config.go:182] Loaded profile config "stopped-upgrade-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0919 12:31:21.983255    6619 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:21.987756    6619 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:31:21.994590    6619 start.go:297] selected driver: qemu2
	I0919 12:31:21.994596    6619 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:31:21.994606    6619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:21.997137    6619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:31:22.001878    6619 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:31:22.005859    6619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:22.005877    6619 cni.go:84] Creating CNI manager for ""
	I0919 12:31:22.005901    6619 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:31:22.005910    6619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:31:22.005931    6619 start.go:340] cluster config:
	{Name:no-preload-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:22.009675    6619 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.019610    6619 out.go:177] * Starting "no-preload-816000" primary control-plane node in "no-preload-816000" cluster
	I0919 12:31:22.023749    6619 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:31:22.023821    6619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/no-preload-816000/config.json ...
	I0919 12:31:22.023841    6619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/no-preload-816000/config.json: {Name:mk8c982131c38925fc84ac69eacc72c51cde9279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:31:22.023878    6619 cache.go:107] acquiring lock: {Name:mk0d52bfac5dde9c7e687238a9468f2217281522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.023881    6619 cache.go:107] acquiring lock: {Name:mkff9a3c36c0d9594e7d8aa910cde14b4709c7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.023935    6619 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 12:31:22.023945    6619 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.375µs
	I0919 12:31:22.023951    6619 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 12:31:22.023957    6619 cache.go:107] acquiring lock: {Name:mkd60969dd4a1f05dc534f3ef5151d5b0c5a81bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.023948    6619 cache.go:107] acquiring lock: {Name:mk18dd59a2d2a3c978b7d25deca59219f45a2e44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.023997    6619 cache.go:107] acquiring lock: {Name:mk2c7349dc157441f23ae96e59f66cfb0ce302bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.024014    6619 cache.go:107] acquiring lock: {Name:mkc806a329f45d5af8e2f9550dad0e44936b42d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.024081    6619 cache.go:107] acquiring lock: {Name:mk1b552b08959ee22ffe0fee5646ca259bd121e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.024085    6619 cache.go:107] acquiring lock: {Name:mk568249df3ed79e9203180e6b1608df3f989f5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:22.024476    6619 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0919 12:31:22.024490    6619 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0919 12:31:22.024492    6619 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0919 12:31:22.024476    6619 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0919 12:31:22.024476    6619 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0919 12:31:22.024523    6619 start.go:360] acquireMachinesLock for no-preload-816000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:22.024533    6619 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0919 12:31:22.024554    6619 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "no-preload-816000"
	I0919 12:31:22.024564    6619 start.go:93] Provisioning new machine with config: &{Name:no-preload-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:22.024603    6619 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:22.024616    6619 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0919 12:31:22.028733    6619 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:22.035838    6619 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0919 12:31:22.035887    6619 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0919 12:31:22.035943    6619 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0919 12:31:22.038403    6619 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0919 12:31:22.038406    6619 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0919 12:31:22.038586    6619 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0919 12:31:22.038654    6619 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0919 12:31:22.044691    6619 start.go:159] libmachine.API.Create for "no-preload-816000" (driver="qemu2")
	I0919 12:31:22.044714    6619 client.go:168] LocalClient.Create starting
	I0919 12:31:22.044786    6619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:22.044820    6619 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:22.044829    6619 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:22.044877    6619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:22.044909    6619 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:22.044920    6619 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:22.045280    6619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:22.239707    6619 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:22.427865    6619 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:22.427887    6619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:22.428091    6619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:22.437713    6619 main.go:141] libmachine: STDOUT: 
	I0919 12:31:22.437734    6619 main.go:141] libmachine: STDERR: 
	I0919 12:31:22.437798    6619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2 +20000M
	I0919 12:31:22.445411    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0919 12:31:22.446621    6619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:22.446632    6619 main.go:141] libmachine: STDERR: 
	I0919 12:31:22.446643    6619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:22.446648    6619 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:22.446659    6619 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:22.446686    6619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:51:46:2a:cb:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:22.448447    6619 main.go:141] libmachine: STDOUT: 
	I0919 12:31:22.448461    6619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:22.448480    6619 client.go:171] duration metric: took 403.770709ms to LocalClient.Create
	I0919 12:31:22.453553    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0919 12:31:22.456269    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0919 12:31:22.478120    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0919 12:31:22.504320    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0919 12:31:22.520193    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0919 12:31:22.568378    6619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0919 12:31:22.625216    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0919 12:31:22.625229    6619 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 601.191833ms
	I0919 12:31:22.625239    6619 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0919 12:31:24.449779    6619 start.go:128] duration metric: took 2.425240625s to createHost
	I0919 12:31:24.449788    6619 start.go:83] releasing machines lock for "no-preload-816000", held for 2.425306s
	W0919 12:31:24.449800    6619 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:24.463107    6619 out.go:177] * Deleting "no-preload-816000" in qemu2 ...
	W0919 12:31:24.493149    6619 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:24.493161    6619 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:25.957114    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0919 12:31:25.957151    6619 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.933230333s
	I0919 12:31:25.957168    6619 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0919 12:31:26.297335    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0919 12:31:26.297361    6619 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.273658333s
	I0919 12:31:26.297373    6619 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0919 12:31:26.796387    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0919 12:31:26.796446    6619 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.772685458s
	I0919 12:31:26.796467    6619 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0919 12:31:26.893388    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0919 12:31:26.893439    6619 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.869611334s
	I0919 12:31:26.893462    6619 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0919 12:31:27.297090    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0919 12:31:27.297163    6619 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 5.273354542s
	I0919 12:31:27.297197    6619 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0919 12:31:29.493725    6619 start.go:360] acquireMachinesLock for no-preload-816000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:29.494079    6619 start.go:364] duration metric: took 277.875µs to acquireMachinesLock for "no-preload-816000"
	I0919 12:31:29.494605    6619 start.go:93] Provisioning new machine with config: &{Name:no-preload-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:29.494820    6619 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:29.525613    6619 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:29.573832    6619 start.go:159] libmachine.API.Create for "no-preload-816000" (driver="qemu2")
	I0919 12:31:29.573894    6619 client.go:168] LocalClient.Create starting
	I0919 12:31:29.574057    6619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:29.574150    6619 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:29.574175    6619 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:29.574273    6619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:29.574328    6619 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:29.574347    6619 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:29.574941    6619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:29.846528    6619 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:29.907408    6619 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:29.907414    6619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:29.907619    6619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:29.916748    6619 main.go:141] libmachine: STDOUT: 
	I0919 12:31:29.916766    6619 main.go:141] libmachine: STDERR: 
	I0919 12:31:29.916829    6619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2 +20000M
	I0919 12:31:29.924855    6619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:29.924917    6619 main.go:141] libmachine: STDERR: 
	I0919 12:31:29.924928    6619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:29.924941    6619 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:29.924953    6619 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:29.924996    6619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:85:93:ec:62:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:29.926732    6619 main.go:141] libmachine: STDOUT: 
	I0919 12:31:29.926747    6619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:29.926760    6619 client.go:171] duration metric: took 352.871458ms to LocalClient.Create
	I0919 12:31:30.656403    6619 cache.go:157] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0919 12:31:30.656459    6619 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.632766417s
	I0919 12:31:30.656477    6619 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0919 12:31:30.656534    6619 cache.go:87] Successfully saved all images to host disk.
	I0919 12:31:31.928151    6619 start.go:128] duration metric: took 2.433337541s to createHost
	I0919 12:31:31.928200    6619 start.go:83] releasing machines lock for "no-preload-816000", held for 2.4341775s
	W0919 12:31:31.928346    6619 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:31.956465    6619 out.go:201] 
	W0919 12:31:31.959526    6619 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:31.959545    6619 out.go:270] * 
	W0919 12:31:31.960616    6619 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:31.971347    6619 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-816000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (47.910667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.15s)
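Note: the whole no-preload group below fails from a single root cause visible in the stderr above: the qemu2 driver cannot dial the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A minimal triage sketch for the CI host follows; the client path is taken from the log, while the daemon path and the gateway address are assumptions based on socket_vmnet's documented defaults:

	ls -l /var/run/socket_vmnet        # does the unix socket minikube dials even exist?
	pgrep -fl socket_vmnet             # is any socket_vmnet daemon process running?
	# Assumed daemon path; it must run as root to create the vmnet interface:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If pgrep finds nothing, the daemon never started (or died), and every socket_vmnet-backed test in this run will fail the same way regardless of profile.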

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-816000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-816000 create -f testdata/busybox.yaml: exit status 1 (28.829666ms)

** stderr ** 
	error: context "no-preload-816000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-816000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (29.89575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (29.318542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
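Note: the DeployApp, addon, and post-stop failures in this group are downstream effects. minikube only writes the profile's kubeconfig context after a successful start, so kubectl's `context "no-preload-816000" does not exist` is the expected symptom once FirstStart fails. A quick way to confirm the context is simply absent (a sketch, not part of the harness):

	kubectl config get-contexts                              # no-preload-816000 will not be listed
	kubectl config view -o jsonpath='{.contexts[*].name}'    # same check, script-friendly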

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-816000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-816000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-816000 describe deploy/metrics-server -n kube-system: exit status 1 (26.180041ms)

** stderr ** 
	error: context "no-preload-816000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-816000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (29.32525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (7.39s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-816000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-816000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.330144959s)

-- stdout --
	* [no-preload-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-816000" primary control-plane node in "no-preload-816000" cluster
	* Restarting existing qemu2 VM for "no-preload-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:31:35.841877    6701 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:35.841993    6701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:35.841997    6701 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:35.841999    6701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:35.842139    6701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:35.843230    6701 out.go:352] Setting JSON to false
	I0919 12:31:35.859902    6701 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3660,"bootTime":1726770635,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:35.859968    6701 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:35.866350    6701 out.go:177] * [no-preload-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:35.874388    6701 notify.go:220] Checking for updates...
	I0919 12:31:35.879373    6701 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:35.887309    6701 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:35.894317    6701 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:35.901233    6701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:35.909350    6701 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:35.916292    6701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:35.920601    6701 config.go:182] Loaded profile config "no-preload-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:35.920877    6701 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:35.925320    6701 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:31:35.932335    6701 start.go:297] selected driver: qemu2
	I0919 12:31:35.932341    6701 start.go:901] validating driver "qemu2" against &{Name:no-preload-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:no-preload-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:35.932395    6701 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:35.934792    6701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:35.934818    6701 cni.go:84] Creating CNI manager for ""
	I0919 12:31:35.934842    6701 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:31:35.934862    6701 start.go:340] cluster config:
	{Name:no-preload-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-816000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:35.938828    6701 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.948296    6701 out.go:177] * Starting "no-preload-816000" primary control-plane node in "no-preload-816000" cluster
	I0919 12:31:35.951364    6701 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:31:35.951431    6701 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/no-preload-816000/config.json ...
	I0919 12:31:35.951463    6701 cache.go:107] acquiring lock: {Name:mk0d52bfac5dde9c7e687238a9468f2217281522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951484    6701 cache.go:107] acquiring lock: {Name:mkc806a329f45d5af8e2f9550dad0e44936b42d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951537    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 12:31:35.951546    6701 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.833µs
	I0919 12:31:35.951554    6701 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 12:31:35.951564    6701 cache.go:107] acquiring lock: {Name:mk1b552b08959ee22ffe0fee5646ca259bd121e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951571    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0919 12:31:35.951583    6701 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 98.917µs
	I0919 12:31:35.951595    6701 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0919 12:31:35.951605    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0919 12:31:35.951610    6701 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 47µs
	I0919 12:31:35.951614    6701 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0919 12:31:35.951611    6701 cache.go:107] acquiring lock: {Name:mkd60969dd4a1f05dc534f3ef5151d5b0c5a81bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951619    6701 cache.go:107] acquiring lock: {Name:mk18dd59a2d2a3c978b7d25deca59219f45a2e44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951602    6701 cache.go:107] acquiring lock: {Name:mk568249df3ed79e9203180e6b1608df3f989f5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951652    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0919 12:31:35.951463    6701 cache.go:107] acquiring lock: {Name:mkff9a3c36c0d9594e7d8aa910cde14b4709c7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951656    6701 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 46.042µs
	I0919 12:31:35.951660    6701 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0919 12:31:35.951592    6701 cache.go:107] acquiring lock: {Name:mk2c7349dc157441f23ae96e59f66cfb0ce302bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:35.951678    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0919 12:31:35.951688    6701 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 78.375µs
	I0919 12:31:35.951693    6701 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0919 12:31:35.951758    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0919 12:31:35.951767    6701 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 309.083µs
	I0919 12:31:35.951775    6701 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0919 12:31:35.951803    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0919 12:31:35.951808    6701 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 239.167µs
	I0919 12:31:35.951815    6701 cache.go:115] /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0919 12:31:35.951818    6701 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0919 12:31:35.951855    6701 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 250.708µs
	I0919 12:31:35.951878    6701 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0919 12:31:35.951883    6701 cache.go:87] Successfully saved all images to host disk.
	I0919 12:31:35.952013    6701 start.go:360] acquireMachinesLock for no-preload-816000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:35.952054    6701 start.go:364] duration metric: took 35.041µs to acquireMachinesLock for "no-preload-816000"
	I0919 12:31:35.952064    6701 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:35.952070    6701 fix.go:54] fixHost starting: 
	I0919 12:31:35.952210    6701 fix.go:112] recreateIfNeeded on no-preload-816000: state=Stopped err=<nil>
	W0919 12:31:35.952221    6701 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:35.960267    6701 out.go:177] * Restarting existing qemu2 VM for "no-preload-816000" ...
	I0919 12:31:35.964317    6701 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:35.964361    6701 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:85:93:ec:62:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:35.966549    6701 main.go:141] libmachine: STDOUT: 
	I0919 12:31:35.966567    6701 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:35.966603    6701 fix.go:56] duration metric: took 14.531542ms for fixHost
	I0919 12:31:35.966609    6701 start.go:83] releasing machines lock for "no-preload-816000", held for 14.550208ms
	W0919 12:31:35.966614    6701 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:35.966650    6701 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:35.966657    6701 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:40.968578    6701 start.go:360] acquireMachinesLock for no-preload-816000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:43.027818    6701 start.go:364] duration metric: took 2.059248208s to acquireMachinesLock for "no-preload-816000"
	I0919 12:31:43.027888    6701 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:43.027906    6701 fix.go:54] fixHost starting: 
	I0919 12:31:43.028667    6701 fix.go:112] recreateIfNeeded on no-preload-816000: state=Stopped err=<nil>
	W0919 12:31:43.028698    6701 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:43.062393    6701 out.go:177] * Restarting existing qemu2 VM for "no-preload-816000" ...
	I0919 12:31:43.091644    6701 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:43.091924    6701 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:85:93:ec:62:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/no-preload-816000/disk.qcow2
	I0919 12:31:43.101851    6701 main.go:141] libmachine: STDOUT: 
	I0919 12:31:43.101942    6701 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:43.102034    6701 fix.go:56] duration metric: took 74.132083ms for fixHost
	I0919 12:31:43.102063    6701 start.go:83] releasing machines lock for "no-preload-816000", held for 74.204625ms
	W0919 12:31:43.102298    6701 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:43.109569    6701 out.go:201] 
	W0919 12:31:43.114733    6701 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:43.114796    6701 out.go:270] * 
	* 
	W0919 12:31:43.116237    6701 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:43.128853    6701 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-816000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (58.431166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.39s)
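Note: SecondStart exercises the restart path (fixHost) instead of createHost and dies on the identical connection error, which points at the host environment rather than the profile: qemu itself is never reached because socket_vmnet_client exits first. One way to isolate this (a sketch, under the assumption that the guest disk image is intact) is to relaunch the logged qemu command without the vmnet plumbing:

	# Take the qemu-system-aarch64 invocation logged above, then:
	#   1. drop the /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet prefix
	#   2. swap -netdev socket,id=net0,fd=3 for -netdev user,id=net0
	# If the VM boots with user-mode networking, the image and the qemu install are
	# fine and only the vmnet hand-off is broken.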

TestStartStop/group/embed-certs/serial/FirstStart (10s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-850000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-850000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.933824166s)

-- stdout --
	* [embed-certs-850000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-850000" primary control-plane node in "embed-certs-850000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:31:40.592937    6718 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:40.593064    6718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:40.593067    6718 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:40.593069    6718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:40.593200    6718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:40.594427    6718 out.go:352] Setting JSON to false
	I0919 12:31:40.611515    6718 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3665,"bootTime":1726770635,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:40.611584    6718 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:40.614574    6718 out.go:177] * [embed-certs-850000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:40.621474    6718 notify.go:220] Checking for updates...
	I0919 12:31:40.624877    6718 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:40.632559    6718 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:40.639818    6718 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:40.647595    6718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:40.655528    6718 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:40.663579    6718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:40.668041    6718 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:40.668130    6718 config.go:182] Loaded profile config "no-preload-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:40.668183    6718 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:40.671620    6718 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:31:40.678601    6718 start.go:297] selected driver: qemu2
	I0919 12:31:40.678608    6718 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:31:40.678614    6718 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:40.681302    6718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:31:40.684520    6718 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:31:40.688667    6718 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:40.688683    6718 cni.go:84] Creating CNI manager for ""
	I0919 12:31:40.688707    6718 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:31:40.688719    6718 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:31:40.688742    6718 start.go:340] cluster config:
	{Name:embed-certs-850000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:40.693035    6718 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:40.701593    6718 out.go:177] * Starting "embed-certs-850000" primary control-plane node in "embed-certs-850000" cluster
	I0919 12:31:40.704689    6718 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:31:40.704714    6718 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:31:40.704724    6718 cache.go:56] Caching tarball of preloaded images
	I0919 12:31:40.704787    6718 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:31:40.704794    6718 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:31:40.704864    6718 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/embed-certs-850000/config.json ...
	I0919 12:31:40.704878    6718 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/embed-certs-850000/config.json: {Name:mkd88667ad2be05cf3126f03db9466c94374a5cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:31:40.705508    6718 start.go:360] acquireMachinesLock for embed-certs-850000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:40.705549    6718 start.go:364] duration metric: took 33.125µs to acquireMachinesLock for "embed-certs-850000"
	I0919 12:31:40.705561    6718 start.go:93] Provisioning new machine with config: &{Name:embed-certs-850000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:embed-certs-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:40.705595    6718 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:40.710574    6718 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:40.731463    6718 start.go:159] libmachine.API.Create for "embed-certs-850000" (driver="qemu2")
	I0919 12:31:40.731507    6718 client.go:168] LocalClient.Create starting
	I0919 12:31:40.731583    6718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:40.731618    6718 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:40.731628    6718 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:40.731673    6718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:40.731702    6718 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:40.731713    6718 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:40.732097    6718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:40.940924    6718 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:41.006053    6718 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:41.006062    6718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:41.006279    6718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:41.015596    6718 main.go:141] libmachine: STDOUT: 
	I0919 12:31:41.015616    6718 main.go:141] libmachine: STDERR: 
	I0919 12:31:41.015677    6718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2 +20000M
	I0919 12:31:41.023768    6718 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:41.023782    6718 main.go:141] libmachine: STDERR: 
	I0919 12:31:41.023796    6718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:41.023799    6718 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:41.023810    6718 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:41.023833    6718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d4:c3:9d:94:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:41.025394    6718 main.go:141] libmachine: STDOUT: 
	I0919 12:31:41.025407    6718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:41.025435    6718 client.go:171] duration metric: took 293.929875ms to LocalClient.Create
	I0919 12:31:43.027560    6718 start.go:128] duration metric: took 2.322020042s to createHost
	I0919 12:31:43.027627    6718 start.go:83] releasing machines lock for "embed-certs-850000", held for 2.322140333s
	W0919 12:31:43.027675    6718 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:43.087463    6718 out.go:177] * Deleting "embed-certs-850000" in qemu2 ...
	W0919 12:31:43.146823    6718 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:43.146856    6718 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:48.148945    6718 start.go:360] acquireMachinesLock for embed-certs-850000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:48.149320    6718 start.go:364] duration metric: took 299.292µs to acquireMachinesLock for "embed-certs-850000"
	I0919 12:31:48.149432    6718 start.go:93] Provisioning new machine with config: &{Name:embed-certs-850000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:embed-certs-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:48.149689    6718 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:48.159481    6718 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:48.211407    6718 start.go:159] libmachine.API.Create for "embed-certs-850000" (driver="qemu2")
	I0919 12:31:48.211452    6718 client.go:168] LocalClient.Create starting
	I0919 12:31:48.211566    6718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:48.211624    6718 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:48.211640    6718 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:48.211714    6718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:48.211759    6718 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:48.211774    6718 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:48.212595    6718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:48.388188    6718 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:48.423128    6718 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:48.423134    6718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:48.423318    6718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:48.432407    6718 main.go:141] libmachine: STDOUT: 
	I0919 12:31:48.432428    6718 main.go:141] libmachine: STDERR: 
	I0919 12:31:48.432477    6718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2 +20000M
	I0919 12:31:48.440316    6718 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:48.440333    6718 main.go:141] libmachine: STDERR: 
	I0919 12:31:48.440344    6718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:48.440349    6718 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:48.440357    6718 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:48.440379    6718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:3a:12:10:ea:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:48.441934    6718 main.go:141] libmachine: STDOUT: 
	I0919 12:31:48.441949    6718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:48.441961    6718 client.go:171] duration metric: took 230.511333ms to LocalClient.Create
	I0919 12:31:50.444054    6718 start.go:128] duration metric: took 2.294411917s to createHost
	I0919 12:31:50.444226    6718 start.go:83] releasing machines lock for "embed-certs-850000", held for 2.294855625s
	W0919 12:31:50.444505    6718 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:50.456349    6718 out.go:201] 
	W0919 12:31:50.466579    6718 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:50.466649    6718 out.go:270] * 
	* 
	W0919 12:31:50.469090    6718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:50.481416    6718 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-850000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (64.544667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.00s)
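All of the qemu2 start failures in this group reduce to the same connect step: nothing is listening on the unix socket /var/run/socket_vmnet when socket_vmnet_client runs, so QEMU never receives its network file descriptor. That connect step can be reproduced outside the test suite with a minimal Go sketch (only the socket path is taken from the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client reports as refused.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err) // "connection refused" on this CI host
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On this host the probe would presumably keep printing the connection-refused error until the socket_vmnet daemon is brought back up.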

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-816000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (31.174084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-816000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-816000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-816000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.159125ms)

** stderr ** 
	error: context "no-preload-816000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-816000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (28.672042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-816000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (28.71025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
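The "(-want +got)" block above has go-cmp's diff format (an inference from the output; the comparison helper itself is not shown in this excerpt). Every expected image sits on the -want side because the VM never booted, so "image list --format=json" returned an empty set. A sketch of the same comparison, assuming github.com/google/go-cmp:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.31.1",
            // ... the rest of the expected v1.31.1 image list
        }
        var got []string // empty: the host is Stopped, so no images could be listed

        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
        }
    }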

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-816000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-816000 --alsologtostderr -v=1: exit status 83 (49.911666ms)

-- stdout --
	* The control-plane node no-preload-816000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-816000"

-- /stdout --
** stderr ** 
	I0919 12:31:43.388681    6740 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:43.388819    6740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:43.388822    6740 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:43.388825    6740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:43.388948    6740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:43.389142    6740 out.go:352] Setting JSON to false
	I0919 12:31:43.389148    6740 mustload.go:65] Loading cluster: no-preload-816000
	I0919 12:31:43.389355    6740 config.go:182] Loaded profile config "no-preload-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:43.393556    6740 out.go:177] * The control-plane node no-preload-816000 host is not running: state=Stopped
	I0919 12:31:43.405630    6740 out.go:177]   To start a cluster, run: "minikube start -p no-preload-816000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-816000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (29.361959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (28.123792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
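Note how the harness branches purely on process exit codes: pause returns 83 when it only prints advice because the host is not running, and the status probe returns 7, which helpers_test.go tolerates as "may be ok" for a stopped host. A small sketch of that pattern, using nothing beyond the commands already shown in the log (binary path and profile name copied from above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "no-preload-816000", "-n", "no-preload-816000")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out) // "Stopped" in this run

        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            fmt.Println("status error: exit status 7 (may be ok)") // mirrors helpers_test.go:239
        }
    }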

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-520000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
E0919 12:31:47.760381    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-520000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.132106333s)

-- stdout --
	* [default-k8s-diff-port-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-520000" primary control-plane node in "default-k8s-diff-port-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:31:43.819024    6764 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:43.819155    6764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:43.819158    6764 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:43.819161    6764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:43.819270    6764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:43.820372    6764 out.go:352] Setting JSON to false
	I0919 12:31:43.836801    6764 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3668,"bootTime":1726770635,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:43.836873    6764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:43.839009    6764 out.go:177] * [default-k8s-diff-port-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:43.846520    6764 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:43.846554    6764 notify.go:220] Checking for updates...
	I0919 12:31:43.852479    6764 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:43.855524    6764 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:43.857215    6764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:43.860565    6764 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:43.863547    6764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:43.866824    6764 config.go:182] Loaded profile config "embed-certs-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:43.866889    6764 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:43.866939    6764 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:43.871482    6764 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:31:43.878523    6764 start.go:297] selected driver: qemu2
	I0919 12:31:43.878530    6764 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:31:43.878536    6764 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:43.880927    6764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 12:31:43.885486    6764 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:31:43.888573    6764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:43.888588    6764 cni.go:84] Creating CNI manager for ""
	I0919 12:31:43.888608    6764 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:31:43.888621    6764 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:31:43.888656    6764 start.go:340] cluster config:
	{Name:default-k8s-diff-port-520000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:43.892482    6764 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:43.899585    6764 out.go:177] * Starting "default-k8s-diff-port-520000" primary control-plane node in "default-k8s-diff-port-520000" cluster
	I0919 12:31:43.903494    6764 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:31:43.903508    6764 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:31:43.903516    6764 cache.go:56] Caching tarball of preloaded images
	I0919 12:31:43.903570    6764 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:31:43.903575    6764 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:31:43.903631    6764 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/default-k8s-diff-port-520000/config.json ...
	I0919 12:31:43.903642    6764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/default-k8s-diff-port-520000/config.json: {Name:mk0273a266db01932f5ad1cea61bbcf42163a636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:31:43.903868    6764 start.go:360] acquireMachinesLock for default-k8s-diff-port-520000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:43.903905    6764 start.go:364] duration metric: took 28.917µs to acquireMachinesLock for "default-k8s-diff-port-520000"
	I0919 12:31:43.903917    6764 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:43.903964    6764 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:43.912947    6764 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:43.932031    6764 start.go:159] libmachine.API.Create for "default-k8s-diff-port-520000" (driver="qemu2")
	I0919 12:31:43.932063    6764 client.go:168] LocalClient.Create starting
	I0919 12:31:43.932131    6764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:43.932164    6764 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:43.932174    6764 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:43.932214    6764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:43.932239    6764 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:43.932247    6764 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:43.932622    6764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:44.090873    6764 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:44.351523    6764 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:44.351533    6764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:44.351815    6764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:44.361277    6764 main.go:141] libmachine: STDOUT: 
	I0919 12:31:44.361300    6764 main.go:141] libmachine: STDERR: 
	I0919 12:31:44.361377    6764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2 +20000M
	I0919 12:31:44.369557    6764 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:44.369580    6764 main.go:141] libmachine: STDERR: 
	I0919 12:31:44.369596    6764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:44.369602    6764 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:44.369615    6764 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:44.369641    6764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ae:e1:8e:a4:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:44.371262    6764 main.go:141] libmachine: STDOUT: 
	I0919 12:31:44.371276    6764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:44.371296    6764 client.go:171] duration metric: took 439.239041ms to LocalClient.Create
	I0919 12:31:46.373411    6764 start.go:128] duration metric: took 2.469500334s to createHost
	I0919 12:31:46.373477    6764 start.go:83] releasing machines lock for "default-k8s-diff-port-520000", held for 2.469639125s
	W0919 12:31:46.373513    6764 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:46.391534    6764 out.go:177] * Deleting "default-k8s-diff-port-520000" in qemu2 ...
	W0919 12:31:46.426174    6764 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:46.426275    6764 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:51.428261    6764 start.go:360] acquireMachinesLock for default-k8s-diff-port-520000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:51.428497    6764 start.go:364] duration metric: took 164.959µs to acquireMachinesLock for "default-k8s-diff-port-520000"
	I0919 12:31:51.428584    6764 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:31:51.428769    6764 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:31:51.437248    6764 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:31:51.485579    6764 start.go:159] libmachine.API.Create for "default-k8s-diff-port-520000" (driver="qemu2")
	I0919 12:31:51.485629    6764 client.go:168] LocalClient.Create starting
	I0919 12:31:51.485744    6764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:31:51.485793    6764 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:51.485810    6764 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:51.485874    6764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:31:51.485910    6764 main.go:141] libmachine: Decoding PEM data...
	I0919 12:31:51.485930    6764 main.go:141] libmachine: Parsing certificate...
	I0919 12:31:51.486431    6764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:31:51.743979    6764 main.go:141] libmachine: Creating SSH key...
	I0919 12:31:51.854550    6764 main.go:141] libmachine: Creating Disk image...
	I0919 12:31:51.854559    6764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:31:51.854738    6764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:51.863811    6764 main.go:141] libmachine: STDOUT: 
	I0919 12:31:51.863827    6764 main.go:141] libmachine: STDERR: 
	I0919 12:31:51.863910    6764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2 +20000M
	I0919 12:31:51.871745    6764 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:31:51.871768    6764 main.go:141] libmachine: STDERR: 
	I0919 12:31:51.871781    6764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:51.871788    6764 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:31:51.871801    6764 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:51.871832    6764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e1:1e:19:91:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:51.873427    6764 main.go:141] libmachine: STDOUT: 
	I0919 12:31:51.873441    6764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:51.873459    6764 client.go:171] duration metric: took 387.834958ms to LocalClient.Create
	I0919 12:31:53.875813    6764 start.go:128] duration metric: took 2.446958375s to createHost
	I0919 12:31:53.875926    6764 start.go:83] releasing machines lock for "default-k8s-diff-port-520000", held for 2.447488459s
	W0919 12:31:53.876271    6764 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:53.885231    6764 out.go:201] 
	W0919 12:31:53.895470    6764 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:53.895509    6764 out.go:270] * 
	* 
	W0919 12:31:53.898304    6764 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:53.909240    6764 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-520000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (62.1535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.20s)
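The stderr above also shows the fixed retry shape minikube follows here: one failed create, a delete of the half-built machine, a five-second wait, one more attempt, then exit status 80. Schematically (the function names below are placeholders, not minikube's actual API):

    package main

    import (
        "fmt"
        "time"
    )

    // startHost and deleteHost stand in for the libmachine create/delete calls in the log.
    func startHost() error {
        return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func deleteHost() { fmt.Println(`* Deleting "default-k8s-diff-port-520000" in qemu2 ...`) }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            deleteHost()
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err) // run then ends with exit status 80
            }
        }
    }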

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-850000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-850000 create -f testdata/busybox.yaml: exit status 1 (29.101084ms)

** stderr ** 
	error: context "embed-certs-850000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-850000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (29.064417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (28.655625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
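Each of the kubectl failures in these sections is the same symptom: the profile's kubeconfig context was never written because the first start aborted, so any "kubectl --context embed-certs-850000 ..." call fails immediately. A guard for that case (a sketch; "kubectl config get-contexts -o name" is a standard kubectl subcommand, the rest is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        for _, name := range strings.Fields(string(out)) {
            if name == "embed-certs-850000" {
                fmt.Println("context exists; safe to deploy")
                return
            }
        }
        fmt.Println(`context "embed-certs-850000" does not exist`) // the error kubectl prints above
    }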

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-850000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-850000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-850000 describe deploy/metrics-server -n kube-system: exit status 1 (26.678959ms)

** stderr ** 
	error: context "embed-certs-850000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-850000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (28.728958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-520000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520000 create -f testdata/busybox.yaml: exit status 1 (29.381875ms)

** stderr ** 
	error: context "default-k8s-diff-port-520000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-520000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (28.115ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (28.384375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-520000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-520000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520000 describe deploy/metrics-server -n kube-system: exit status 1 (26.541083ms)

** stderr ** 
	error: context "default-k8s-diff-port-520000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-520000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (28.553875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-850000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
E0919 12:31:56.015245    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-850000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.222793333s)

-- stdout --
	* [embed-certs-850000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-850000" primary control-plane node in "embed-certs-850000" cluster
	* Restarting existing qemu2 VM for "embed-certs-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:31:54.581068    6843 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:54.581189    6843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:54.581193    6843 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:54.581195    6843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:54.581325    6843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:54.582583    6843 out.go:352] Setting JSON to false
	I0919 12:31:54.600209    6843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3679,"bootTime":1726770635,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:54.600319    6843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:54.604203    6843 out.go:177] * [embed-certs-850000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:54.611260    6843 notify.go:220] Checking for updates...
	I0919 12:31:54.615176    6843 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:54.623045    6843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:54.630221    6843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:54.639177    6843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:54.649191    6843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:54.657276    6843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:54.662547    6843 config.go:182] Loaded profile config "embed-certs-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:54.662851    6843 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:54.667140    6843 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:31:54.675109    6843 start.go:297] selected driver: qemu2
	I0919 12:31:54.675115    6843 start.go:901] validating driver "qemu2" against &{Name:embed-certs-850000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:54.675191    6843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:54.677856    6843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:54.677886    6843 cni.go:84] Creating CNI manager for ""
	I0919 12:31:54.677918    6843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:31:54.677941    6843 start.go:340] cluster config:
	{Name:embed-certs-850000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:54.682222    6843 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:54.691149    6843 out.go:177] * Starting "embed-certs-850000" primary control-plane node in "embed-certs-850000" cluster
	I0919 12:31:54.695148    6843 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:31:54.695180    6843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:31:54.695190    6843 cache.go:56] Caching tarball of preloaded images
	I0919 12:31:54.695258    6843 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:31:54.695266    6843 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:31:54.695330    6843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/embed-certs-850000/config.json ...
	I0919 12:31:54.695893    6843 start.go:360] acquireMachinesLock for embed-certs-850000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:54.695935    6843 start.go:364] duration metric: took 35.625µs to acquireMachinesLock for "embed-certs-850000"
	I0919 12:31:54.695949    6843 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:54.695959    6843 fix.go:54] fixHost starting: 
	I0919 12:31:54.696114    6843 fix.go:112] recreateIfNeeded on embed-certs-850000: state=Stopped err=<nil>
	W0919 12:31:54.696124    6843 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:54.704188    6843 out.go:177] * Restarting existing qemu2 VM for "embed-certs-850000" ...
	I0919 12:31:54.708170    6843 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:54.708218    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:3a:12:10:ea:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:54.710828    6843 main.go:141] libmachine: STDOUT: 
	I0919 12:31:54.710851    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:54.710891    6843 fix.go:56] duration metric: took 14.934083ms for fixHost
	I0919 12:31:54.710896    6843 start.go:83] releasing machines lock for "embed-certs-850000", held for 14.955333ms
	W0919 12:31:54.710915    6843 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:54.710964    6843 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:54.710971    6843 start.go:729] Will try again in 5 seconds ...
	I0919 12:31:59.713054    6843 start.go:360] acquireMachinesLock for embed-certs-850000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:59.713413    6843 start.go:364] duration metric: took 269.208µs to acquireMachinesLock for "embed-certs-850000"
	I0919 12:31:59.713525    6843 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:59.713548    6843 fix.go:54] fixHost starting: 
	I0919 12:31:59.714307    6843 fix.go:112] recreateIfNeeded on embed-certs-850000: state=Stopped err=<nil>
	W0919 12:31:59.714339    6843 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:59.723119    6843 out.go:177] * Restarting existing qemu2 VM for "embed-certs-850000" ...
	I0919 12:31:59.729086    6843 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:59.729336    6843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:3a:12:10:ea:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/embed-certs-850000/disk.qcow2
	I0919 12:31:59.738670    6843 main.go:141] libmachine: STDOUT: 
	I0919 12:31:59.738761    6843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:59.738844    6843 fix.go:56] duration metric: took 25.299375ms for fixHost
	I0919 12:31:59.738864    6843 start.go:83] releasing machines lock for "embed-certs-850000", held for 25.42975ms
	W0919 12:31:59.739202    6843 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:59.747061    6843 out.go:201] 
	W0919 12:31:59.751201    6843 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:59.751234    6843 out.go:270] * 
	* 
	W0919 12:31:59.753566    6843 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:31:59.761979    6843 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-850000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (65.625666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.29s)
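
Every start attempt in this failure chain dies at the same point: the qemu2 driver launches QEMU through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch for probing that socket directly, independent of minikube, is below; the socket path is taken from the logs above, everything else is illustrative:

	// probe_socket_vmnet.go: check whether anything is listening on the
	// unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the driver start failure.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the daemon on the build host is down, which would explain why every qemu2-driver test in this report fails in the same way before provisioning begins.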

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-520000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-520000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.20002375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-520000" primary control-plane node in "default-k8s-diff-port-520000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-520000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-520000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:31:57.732826    6872 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:31:57.732934    6872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:57.732937    6872 out.go:358] Setting ErrFile to fd 2...
	I0919 12:31:57.732940    6872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:31:57.733081    6872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:31:57.734021    6872 out.go:352] Setting JSON to false
	I0919 12:31:57.750143    6872 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3682,"bootTime":1726770635,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:31:57.750214    6872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:31:57.755219    6872 out.go:177] * [default-k8s-diff-port-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:31:57.764092    6872 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:31:57.764145    6872 notify.go:220] Checking for updates...
	I0919 12:31:57.771242    6872 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:31:57.774189    6872 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:31:57.777071    6872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:31:57.780200    6872 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:31:57.783234    6872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:31:57.786379    6872 config.go:182] Loaded profile config "default-k8s-diff-port-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:31:57.786656    6872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:31:57.791051    6872 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:31:57.798098    6872 start.go:297] selected driver: qemu2
	I0919 12:31:57.798106    6872 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:57.798162    6872 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:31:57.800511    6872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 12:31:57.800538    6872 cni.go:84] Creating CNI manager for ""
	I0919 12:31:57.800558    6872 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:31:57.800590    6872 start.go:340] cluster config:
	{Name:default-k8s-diff-port-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:31:57.804381    6872 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:31:57.812087    6872 out.go:177] * Starting "default-k8s-diff-port-520000" primary control-plane node in "default-k8s-diff-port-520000" cluster
	I0919 12:31:57.817055    6872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:31:57.817070    6872 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:31:57.817079    6872 cache.go:56] Caching tarball of preloaded images
	I0919 12:31:57.817145    6872 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:31:57.817151    6872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:31:57.817219    6872 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/default-k8s-diff-port-520000/config.json ...
	I0919 12:31:57.817645    6872 start.go:360] acquireMachinesLock for default-k8s-diff-port-520000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:31:57.817681    6872 start.go:364] duration metric: took 29.541µs to acquireMachinesLock for "default-k8s-diff-port-520000"
	I0919 12:31:57.817690    6872 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:31:57.817698    6872 fix.go:54] fixHost starting: 
	I0919 12:31:57.817829    6872 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520000: state=Stopped err=<nil>
	W0919 12:31:57.817838    6872 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:31:57.821128    6872 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-520000" ...
	I0919 12:31:57.830102    6872 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:31:57.830136    6872 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e1:1e:19:91:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:31:57.832210    6872 main.go:141] libmachine: STDOUT: 
	I0919 12:31:57.832226    6872 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:31:57.832253    6872 fix.go:56] duration metric: took 14.556875ms for fixHost
	I0919 12:31:57.832258    6872 start.go:83] releasing machines lock for "default-k8s-diff-port-520000", held for 14.572417ms
	W0919 12:31:57.832263    6872 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:31:57.832291    6872 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:31:57.832296    6872 start.go:729] Will try again in 5 seconds ...
	I0919 12:32:02.834381    6872 start.go:360] acquireMachinesLock for default-k8s-diff-port-520000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:32:02.834832    6872 start.go:364] duration metric: took 365.375µs to acquireMachinesLock for "default-k8s-diff-port-520000"
	I0919 12:32:02.834960    6872 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:32:02.834983    6872 fix.go:54] fixHost starting: 
	I0919 12:32:02.835714    6872 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520000: state=Stopped err=<nil>
	W0919 12:32:02.835740    6872 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:32:02.843041    6872 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-520000" ...
	I0919 12:32:02.857239    6872 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:32:02.857439    6872 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e1:1e:19:91:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/default-k8s-diff-port-520000/disk.qcow2
	I0919 12:32:02.867407    6872 main.go:141] libmachine: STDOUT: 
	I0919 12:32:02.867490    6872 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:32:02.867597    6872 fix.go:56] duration metric: took 32.589916ms for fixHost
	I0919 12:32:02.867616    6872 start.go:83] releasing machines lock for "default-k8s-diff-port-520000", held for 32.762292ms
	W0919 12:32:02.867831    6872 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-520000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-520000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:32:02.875057    6872 out.go:201] 
	W0919 12:32:02.879033    6872 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:32:02.879068    6872 out.go:270] * 
	* 
	W0919 12:32:02.881651    6872 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:32:02.890923    6872 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-520000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (66.629125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
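
As with embed-certs above, the driver makes exactly two attempts: start.go logs "Will try again in 5 seconds ..." after the first failure, then exits with GUEST_PROVISION (exit status 80) when the retry also cannot reach the socket. A sketch of that fixed-delay, single-retry shape, purely illustrative and not minikube's actual code:

	// retry_sketch.go: the two-attempt, 5-second-delay pattern visible in
	// the logs ("StartHost failed, but will try again" -> "Exiting due to
	// GUEST_PROVISION"). Illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for the driver start; in this run it always fails with
		// a refused connection to /var/run/socket_vmnet.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}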

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-850000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (32.034791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-850000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-850000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-850000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.27425ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-850000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-850000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (28.457833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
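
The context "embed-certs-850000" does not exist errors are a downstream symptom of the failed SecondStart: the VM was never provisioned, so no context for the profile was written back to the kubeconfig, and kubectl fails before any request is sent. A small sketch for inspecting which contexts the test kubeconfig actually contains, assuming the KUBECONFIG path from these logs and the k8s.io/client-go clientcmd package:

	// list_contexts.go: print the contexts in the test kubeconfig to show
	// why `kubectl --context embed-certs-850000` fails immediately.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the KUBECONFIG value printed in the logs above.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19664-1099/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		for name := range cfg.Contexts {
			fmt.Println(name) // "embed-certs-850000" will not appear
		}
	}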

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-850000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (28.637833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
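
The (-want +got) block above is go-cmp diff output: all eight expected v1.31.1 images sit on the "-" (want) side and the "+" (got) side is empty, because `image list` had no running VM to query. A short sketch reproducing that diff shape with github.com/google/go-cmp, using a truncated want list for brevity:

	// images_diff.go: reproduce the "-want +got" report with an empty
	// "got", as happens when the host is Stopped.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.1",
			// remaining expected images elided for brevity
		}
		var got []string // `image list` returned nothing useful
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}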

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-850000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-850000 --alsologtostderr -v=1: exit status 83 (40.813833ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-850000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-850000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:32:00.024893    6891 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:32:00.025053    6891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:00.025057    6891 out.go:358] Setting ErrFile to fd 2...
	I0919 12:32:00.025059    6891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:00.025191    6891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:32:00.025380    6891 out.go:352] Setting JSON to false
	I0919 12:32:00.025386    6891 mustload.go:65] Loading cluster: embed-certs-850000
	I0919 12:32:00.025603    6891 config.go:182] Loaded profile config "embed-certs-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:32:00.030022    6891 out.go:177] * The control-plane node embed-certs-850000 host is not running: state=Stopped
	I0919 12:32:00.034027    6891 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-850000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-850000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (28.723291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (28.065375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
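
Note the two distinct exit codes in this block: `pause` exits 83 with the advisory "host is not running: state=Stopped" message, while the post-mortem `status` probe exits 7, which helpers_test.go explicitly tolerates ("may be ok"). A sketch of that post-mortem probe, running the same status command and branching on the exit code; the binary path and profile name are taken from the logs:

	// postmortem_status.go: run `minikube status --format={{.Host}}` and
	// treat exit status 7 (host stopped) as tolerable, mirroring
	// "status error: exit status 7 (may be ok)" above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-850000", "-n", "embed-certs-850000")
		out, err := cmd.Output()
		fmt.Printf("host state: %s\n", out)
		if ee, ok := err.(*exec.ExitError); ok {
			if ee.ExitCode() == 7 {
				fmt.Println("status error: exit status 7 (may be ok)")
			} else {
				fmt.Printf("unexpected status exit code: %d\n", ee.ExitCode())
			}
		}
	}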

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-839000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-839000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.852463833s)

                                                
                                                
-- stdout --
	* [newest-cni-839000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-839000" primary control-plane node in "newest-cni-839000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-839000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 12:32:00.335583    6908 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:32:00.335717    6908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:00.335720    6908 out.go:358] Setting ErrFile to fd 2...
	I0919 12:32:00.335723    6908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:00.335852    6908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:32:00.336957    6908 out.go:352] Setting JSON to false
	I0919 12:32:00.353401    6908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3685,"bootTime":1726770635,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:32:00.353477    6908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:32:00.356993    6908 out.go:177] * [newest-cni-839000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:32:00.365063    6908 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:32:00.365121    6908 notify.go:220] Checking for updates...
	I0919 12:32:00.371984    6908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:32:00.375026    6908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:32:00.377977    6908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:32:00.380972    6908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:32:00.384006    6908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:32:00.387264    6908 config.go:182] Loaded profile config "default-k8s-diff-port-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:32:00.387328    6908 config.go:182] Loaded profile config "multinode-327000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:32:00.387390    6908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:32:00.391973    6908 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 12:32:00.398963    6908 start.go:297] selected driver: qemu2
	I0919 12:32:00.398969    6908 start.go:901] validating driver "qemu2" against <nil>
	I0919 12:32:00.398985    6908 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:32:00.401383    6908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0919 12:32:00.401417    6908 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 12:32:00.409013    6908 out.go:177] * Automatically selected the socket_vmnet network
	I0919 12:32:00.412063    6908 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 12:32:00.412077    6908 cni.go:84] Creating CNI manager for ""
	I0919 12:32:00.412100    6908 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:32:00.412104    6908 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 12:32:00.412132    6908 start.go:340] cluster config:
	{Name:newest-cni-839000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:32:00.415978    6908 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:32:00.424001    6908 out.go:177] * Starting "newest-cni-839000" primary control-plane node in "newest-cni-839000" cluster
	I0919 12:32:00.427977    6908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:32:00.427990    6908 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:32:00.428000    6908 cache.go:56] Caching tarball of preloaded images
	I0919 12:32:00.428058    6908 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:32:00.428063    6908 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:32:00.428119    6908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/newest-cni-839000/config.json ...
	I0919 12:32:00.428129    6908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/newest-cni-839000/config.json: {Name:mk3116cf601a505ccde3ed0cb2fb252776648544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 12:32:00.428373    6908 start.go:360] acquireMachinesLock for newest-cni-839000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:32:00.428406    6908 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "newest-cni-839000"
	I0919 12:32:00.428416    6908 start.go:93] Provisioning new machine with config: &{Name:newest-cni-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:32:00.428450    6908 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:32:00.434953    6908 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:32:00.453232    6908 start.go:159] libmachine.API.Create for "newest-cni-839000" (driver="qemu2")
	I0919 12:32:00.453269    6908 client.go:168] LocalClient.Create starting
	I0919 12:32:00.453326    6908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:32:00.453355    6908 main.go:141] libmachine: Decoding PEM data...
	I0919 12:32:00.453365    6908 main.go:141] libmachine: Parsing certificate...
	I0919 12:32:00.453407    6908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:32:00.453430    6908 main.go:141] libmachine: Decoding PEM data...
	I0919 12:32:00.453441    6908 main.go:141] libmachine: Parsing certificate...
	I0919 12:32:00.453785    6908 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:32:00.612332    6908 main.go:141] libmachine: Creating SSH key...
	I0919 12:32:00.694936    6908 main.go:141] libmachine: Creating Disk image...
	I0919 12:32:00.694941    6908 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:32:00.695115    6908 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:00.704158    6908 main.go:141] libmachine: STDOUT: 
	I0919 12:32:00.704176    6908 main.go:141] libmachine: STDERR: 
	I0919 12:32:00.704232    6908 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2 +20000M
	I0919 12:32:00.712132    6908 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:32:00.712146    6908 main.go:141] libmachine: STDERR: 
	I0919 12:32:00.712163    6908 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:00.712171    6908 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:32:00.712184    6908 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:32:00.712209    6908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:33:3a:5d:a4:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:00.713776    6908 main.go:141] libmachine: STDOUT: 
	I0919 12:32:00.713790    6908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:32:00.713812    6908 client.go:171] duration metric: took 260.5435ms to LocalClient.Create
	I0919 12:32:02.715927    6908 start.go:128] duration metric: took 2.28752275s to createHost
	I0919 12:32:02.716001    6908 start.go:83] releasing machines lock for "newest-cni-839000", held for 2.2876595s
	W0919 12:32:02.716036    6908 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:32:02.729957    6908 out.go:177] * Deleting "newest-cni-839000" in qemu2 ...
	W0919 12:32:02.771275    6908 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:32:02.771312    6908 start.go:729] Will try again in 5 seconds ...
	I0919 12:32:07.773413    6908 start.go:360] acquireMachinesLock for newest-cni-839000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:32:07.773965    6908 start.go:364] duration metric: took 416.167µs to acquireMachinesLock for "newest-cni-839000"
	I0919 12:32:07.774139    6908 start.go:93] Provisioning new machine with config: &{Name:newest-cni-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 12:32:07.774389    6908 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 12:32:07.780875    6908 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 12:32:07.832627    6908 start.go:159] libmachine.API.Create for "newest-cni-839000" (driver="qemu2")
	I0919 12:32:07.832679    6908 client.go:168] LocalClient.Create starting
	I0919 12:32:07.832786    6908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/ca.pem
	I0919 12:32:07.832853    6908 main.go:141] libmachine: Decoding PEM data...
	I0919 12:32:07.832871    6908 main.go:141] libmachine: Parsing certificate...
	I0919 12:32:07.832932    6908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19664-1099/.minikube/certs/cert.pem
	I0919 12:32:07.832976    6908 main.go:141] libmachine: Decoding PEM data...
	I0919 12:32:07.832989    6908 main.go:141] libmachine: Parsing certificate...
	I0919 12:32:07.833631    6908 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0919 12:32:08.003471    6908 main.go:141] libmachine: Creating SSH key...
	I0919 12:32:08.093387    6908 main.go:141] libmachine: Creating Disk image...
	I0919 12:32:08.093393    6908 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 12:32:08.093583    6908 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2.raw /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:08.102661    6908 main.go:141] libmachine: STDOUT: 
	I0919 12:32:08.102682    6908 main.go:141] libmachine: STDERR: 
	I0919 12:32:08.102758    6908 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2 +20000M
	I0919 12:32:08.110633    6908 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 12:32:08.110650    6908 main.go:141] libmachine: STDERR: 
	I0919 12:32:08.110662    6908 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:08.110668    6908 main.go:141] libmachine: Starting QEMU VM...
	I0919 12:32:08.110677    6908 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:32:08.110707    6908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d0:b2:cb:34:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:08.112318    6908 main.go:141] libmachine: STDOUT: 
	I0919 12:32:08.112333    6908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:32:08.112345    6908 client.go:171] duration metric: took 279.668542ms to LocalClient.Create
	I0919 12:32:10.114416    6908 start.go:128] duration metric: took 2.340079333s to createHost
	I0919 12:32:10.114467    6908 start.go:83] releasing machines lock for "newest-cni-839000", held for 2.340521959s
	W0919 12:32:10.114681    6908 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:32:10.129770    6908 out.go:201] 
	W0919 12:32:10.136922    6908 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:32:10.136962    6908 out.go:270] * 
	* 
	W0919 12:32:10.138545    6908 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:32:10.149664    6908 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-839000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000: exit status 7 (66.584334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
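
Every failure in this group traces back to one condition: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal standalone sketch for probing that socket (not part of the test suite; the socket path is taken from the log above):

	// probe_socket_vmnet.go - checks whether anything is serving the unix
	// socket that socket_vmnet_client dials in the failures above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" means the socket file exists but no
			// daemon is accepting on it - the state this run was in.
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}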

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-520000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (31.289833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-520000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.262041ms)

** stderr ** 
	error: context "default-k8s-diff-port-520000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (28.651084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
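
The helper fails before ever talking to a cluster: the kubeconfig has no context for the profile, because the VM never started. A sketch of that precondition check (assumes k8s.io/client-go is available; the context name is copied from the failure above):

	// context_check.go - report whether a named kubectl context exists,
	// the same condition kubectl errors on above.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Honors $KUBECONFIG, falling back to ~/.kube/config.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		name := "default-k8s-diff-port-520000" // profile from the failure above
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist\n", name)
		}
	}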

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-520000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (28.622542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
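
The "(-want +got)" output above matches the diff format of github.com/google/go-cmp: every expected image is reported missing because "minikube image list" ran against a VM that never booted. A small sketch that reproduces a diff of this shape (the want list is shortened, and the empty got slice is an assumption standing in for the stopped host):

	// image_diff.go - produce a "(-want +got)" diff like the one above.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the stopped host reports no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}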

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-520000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-520000 --alsologtostderr -v=1: exit status 83 (38.015542ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-520000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-520000"

-- /stdout --
** stderr ** 
	I0919 12:32:03.153605    6933 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:32:03.153760    6933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:03.153763    6933 out.go:358] Setting ErrFile to fd 2...
	I0919 12:32:03.153766    6933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:03.153892    6933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:32:03.154079    6933 out.go:352] Setting JSON to false
	I0919 12:32:03.154087    6933 mustload.go:65] Loading cluster: default-k8s-diff-port-520000
	I0919 12:32:03.154295    6933 config.go:182] Loaded profile config "default-k8s-diff-port-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:32:03.157943    6933 out.go:177] * The control-plane node default-k8s-diff-port-520000 host is not running: state=Stopped
	I0919 12:32:03.161938    6933 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-520000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-520000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (28.584084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (27.691584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)
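
"minikube pause" exits with status 83 here instead of pausing, since the control-plane host is stopped. A sketch of how a caller can capture that non-zero exit status the way the harness reports it (os/exec only; the command line is copied from the test above):

	// exit_code.go - run a command and report its non-zero exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause",
			"-p", "default-k8s-diff-port-520000", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}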

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-839000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-839000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.194308708s)

-- stdout --
	* [newest-cni-839000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-839000" primary control-plane node in "newest-cni-839000" cluster
	* Restarting existing qemu2 VM for "newest-cni-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 12:32:12.535029    6984 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:32:12.535180    6984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:12.535184    6984 out.go:358] Setting ErrFile to fd 2...
	I0919 12:32:12.535187    6984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:12.535314    6984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:32:12.536397    6984 out.go:352] Setting JSON to false
	I0919 12:32:12.553785    6984 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3697,"bootTime":1726770635,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 12:32:12.553859    6984 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 12:32:12.558592    6984 out.go:177] * [newest-cni-839000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 12:32:12.565612    6984 notify.go:220] Checking for updates...
	I0919 12:32:12.569566    6984 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 12:32:12.572626    6984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 12:32:12.575585    6984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 12:32:12.578740    6984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 12:32:12.581911    6984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 12:32:12.584866    6984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 12:32:12.587854    6984 config.go:182] Loaded profile config "newest-cni-839000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:32:12.588124    6984 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 12:32:12.592640    6984 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 12:32:12.597569    6984 start.go:297] selected driver: qemu2
	I0919 12:32:12.597575    6984 start.go:901] validating driver "qemu2" against &{Name:newest-cni-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:32:12.597625    6984 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 12:32:12.600020    6984 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 12:32:12.600044    6984 cni.go:84] Creating CNI manager for ""
	I0919 12:32:12.600063    6984 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 12:32:12.600088    6984 start.go:340] cluster config:
	{Name:newest-cni-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 12:32:12.603544    6984 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 12:32:12.612564    6984 out.go:177] * Starting "newest-cni-839000" primary control-plane node in "newest-cni-839000" cluster
	I0919 12:32:12.616592    6984 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 12:32:12.616605    6984 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 12:32:12.616613    6984 cache.go:56] Caching tarball of preloaded images
	I0919 12:32:12.616674    6984 preload.go:172] Found /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 12:32:12.616680    6984 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 12:32:12.616743    6984 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/newest-cni-839000/config.json ...
	I0919 12:32:12.617191    6984 start.go:360] acquireMachinesLock for newest-cni-839000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:32:12.617229    6984 start.go:364] duration metric: took 31.709µs to acquireMachinesLock for "newest-cni-839000"
	I0919 12:32:12.617239    6984 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:32:12.617247    6984 fix.go:54] fixHost starting: 
	I0919 12:32:12.617376    6984 fix.go:112] recreateIfNeeded on newest-cni-839000: state=Stopped err=<nil>
	W0919 12:32:12.617385    6984 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:32:12.621616    6984 out.go:177] * Restarting existing qemu2 VM for "newest-cni-839000" ...
	I0919 12:32:12.629548    6984 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:32:12.629582    6984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d0:b2:cb:34:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:12.631654    6984 main.go:141] libmachine: STDOUT: 
	I0919 12:32:12.631679    6984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:32:12.631712    6984 fix.go:56] duration metric: took 14.466166ms for fixHost
	I0919 12:32:12.631717    6984 start.go:83] releasing machines lock for "newest-cni-839000", held for 14.483875ms
	W0919 12:32:12.631726    6984 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:32:12.631756    6984 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:32:12.631761    6984 start.go:729] Will try again in 5 seconds ...
	I0919 12:32:17.633415    6984 start.go:360] acquireMachinesLock for newest-cni-839000: {Name:mk1705197fc32666922247336fab48814e1aa2c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 12:32:17.633795    6984 start.go:364] duration metric: took 294.75µs to acquireMachinesLock for "newest-cni-839000"
	I0919 12:32:17.633940    6984 start.go:96] Skipping create...Using existing machine configuration
	I0919 12:32:17.633958    6984 fix.go:54] fixHost starting: 
	I0919 12:32:17.634686    6984 fix.go:112] recreateIfNeeded on newest-cni-839000: state=Stopped err=<nil>
	W0919 12:32:17.634716    6984 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 12:32:17.647523    6984 out.go:177] * Restarting existing qemu2 VM for "newest-cni-839000" ...
	I0919 12:32:17.652317    6984 qemu.go:418] Using hvf for hardware acceleration
	I0919 12:32:17.652507    6984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d0:b2:cb:34:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19664-1099/.minikube/machines/newest-cni-839000/disk.qcow2
	I0919 12:32:17.662059    6984 main.go:141] libmachine: STDOUT: 
	I0919 12:32:17.662150    6984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 12:32:17.662237    6984 fix.go:56] duration metric: took 28.274417ms for fixHost
	I0919 12:32:17.662262    6984 start.go:83] releasing machines lock for "newest-cni-839000", held for 28.446708ms
	W0919 12:32:17.662468    6984 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 12:32:17.670487    6984 out.go:201] 
	W0919 12:32:17.674643    6984 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 12:32:17.674673    6984 out.go:270] * 
	* 
	W0919 12:32:17.677369    6984 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 12:32:17.686440    6984 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-839000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000: exit status 7 (68.073667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
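
As in the first start, the restart path makes exactly one more attempt after a fixed five-second delay ("Will try again in 5 seconds ...") and then gives up with GUEST_PROVISION. A sketch of that retry shape, with a stub standing in for the real driver start:

	// retry_start.go - one retry after a fixed delay, mirroring the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost is a stand-in for the real start, which shells out through
	// socket_vmnet_client and fails while the daemon is down.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("* Failed to start qemu2 VM:", err)
			}
		}
	}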

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-839000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000: exit status 7 (29.389458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-839000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-839000 --alsologtostderr -v=1: exit status 83 (41.229ms)

-- stdout --
	* The control-plane node newest-cni-839000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-839000"

-- /stdout --
** stderr ** 
	I0919 12:32:17.869762    7003 out.go:345] Setting OutFile to fd 1 ...
	I0919 12:32:17.869902    7003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:17.869905    7003 out.go:358] Setting ErrFile to fd 2...
	I0919 12:32:17.869908    7003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 12:32:17.870030    7003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 12:32:17.870226    7003 out.go:352] Setting JSON to false
	I0919 12:32:17.870233    7003 mustload.go:65] Loading cluster: newest-cni-839000
	I0919 12:32:17.870461    7003 config.go:182] Loaded profile config "newest-cni-839000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 12:32:17.874423    7003 out.go:177] * The control-plane node newest-cni-839000 host is not running: state=Stopped
	I0919 12:32:17.878442    7003 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-839000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-839000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000: exit status 7 (29.824625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-839000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000: exit status 7 (29.804792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 8.57
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 199.22
29 TestAddons/serial/Volcano 38.39
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 19.01
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 6.29
39 TestAddons/parallel/CSI 59.38
40 TestAddons/parallel/Headlamp 15.64
41 TestAddons/parallel/CloudSpanner 5.2
42 TestAddons/parallel/LocalPath 52
43 TestAddons/parallel/NvidiaDevicePlugin 5.16
44 TestAddons/parallel/Yakd 10.3
45 TestAddons/StoppedEnableDisable 9.42
53 TestHyperKitDriverInstallOrUpdate 11.44
56 TestErrorSpam/setup 34.71
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.69
60 TestErrorSpam/unpause 0.62
61 TestErrorSpam/stop 55.32
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.66
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.3
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.64
73 TestFunctional/serial/CacheCmd/cache/add_local 1.18
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.68
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 1.85
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.03
81 TestFunctional/serial/ExtraConfig 37.94
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.65
84 TestFunctional/serial/LogsFileCmd 0.6
85 TestFunctional/serial/InvalidService 3.61
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 12.62
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.25
96 TestFunctional/parallel/AddonsCmd 0.09
97 TestFunctional/parallel/PersistentVolumeClaim 25.58
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.43
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.13
111 TestFunctional/parallel/License 0.34
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.19
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.87
119 TestFunctional/parallel/ImageCommands/Setup 1.75
120 TestFunctional/parallel/DockerEnv/bash 0.34
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.49
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.35
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.29
137 TestFunctional/parallel/ServiceCmd/List 0.13
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.11
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
149 TestFunctional/parallel/ProfileCmd/profile_list 0.13
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
151 TestFunctional/parallel/MountCmd/any-port 5
152 TestFunctional/parallel/MountCmd/specific-port 0.79
153 TestFunctional/parallel/MountCmd/VerifyCleanup 0.72
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 181.68
161 TestMultiControlPlane/serial/DeployApp 4.55
162 TestMultiControlPlane/serial/PingHostFromPods 0.74
163 TestMultiControlPlane/serial/AddWorkerNode 52.59
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.29
166 TestMultiControlPlane/serial/CopyFile 4.15
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.23
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 2.1
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 1.15
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.42
277 TestNoKubernetes/serial/Stop 3.52
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
294 TestStartStop/group/old-k8s-version/serial/Stop 3.35
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 3.45
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
318 TestStartStop/group/embed-certs/serial/Stop 3.67
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.4
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 2.08
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0919 11:38:27.179325    1618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0919 11:38:27.179678    1618 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
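
This check needs no VM: it passes as soon as the cached preload tarball for the requested Kubernetes version is on disk. A sketch of the same existence test (the tarball path layout is copied from the log line above; using the default ~/.minikube home is an assumption, since this run pointed MINIKUBE_HOME at the Jenkins workspace):

	// preload_exists.go - stat the cached preload tarball.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			panic(err)
		}
		tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("found local preload:", tarball)
	}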

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-629000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-629000: exit status 85 (96.379541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-629000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |          |
	|         | -p download-only-629000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 11:38:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 11:38:13.631328    1620 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:38:13.631467    1620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:13.631470    1620 out.go:358] Setting ErrFile to fd 2...
	I0919 11:38:13.631472    1620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:13.631592    1620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	W0919 11:38:13.631683    1620 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19664-1099/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19664-1099/.minikube/config/config.json: no such file or directory
	I0919 11:38:13.632891    1620 out.go:352] Setting JSON to true
	I0919 11:38:13.650992    1620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":458,"bootTime":1726770635,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:38:13.651047    1620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:38:13.657258    1620 out.go:97] [download-only-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:38:13.657418    1620 notify.go:220] Checking for updates...
	W0919 11:38:13.657467    1620 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 11:38:13.660116    1620 out.go:169] MINIKUBE_LOCATION=19664
	I0919 11:38:13.663194    1620 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:38:13.667399    1620 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:38:13.670128    1620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:38:13.673144    1620 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	W0919 11:38:13.679218    1620 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 11:38:13.679438    1620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:38:13.684171    1620 out.go:97] Using the qemu2 driver based on user configuration
	I0919 11:38:13.684188    1620 start.go:297] selected driver: qemu2
	I0919 11:38:13.684202    1620 start.go:901] validating driver "qemu2" against <nil>
	I0919 11:38:13.684281    1620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 11:38:13.687136    1620 out.go:169] Automatically selected the socket_vmnet network
	I0919 11:38:13.693341    1620 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 11:38:13.693424    1620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 11:38:13.693456    1620 cni.go:84] Creating CNI manager for ""
	I0919 11:38:13.693495    1620 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 11:38:13.693553    1620 start.go:340] cluster config:
	{Name:download-only-629000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:38:13.698911    1620 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 11:38:13.703497    1620 out.go:97] Downloading VM boot image ...
	I0919 11:38:13.703513    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0919 11:38:20.115451    1620 out.go:97] Starting "download-only-629000" primary control-plane node in "download-only-629000" cluster
	I0919 11:38:20.115469    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 11:38:20.177351    1620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 11:38:20.177363    1620 cache.go:56] Caching tarball of preloaded images
	I0919 11:38:20.177507    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 11:38:20.180351    1620 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0919 11:38:20.180370    1620 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:20.276620    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 11:38:25.893014    1620 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:25.893160    1620 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:26.588978    1620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0919 11:38:26.589182    1620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/download-only-629000/config.json ...
	I0919 11:38:26.589199    1620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/download-only-629000/config.json: {Name:mkb0fe49d0d203e8cbca1874a28797fe699f16a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 11:38:26.589435    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 11:38:26.589626    1620 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0919 11:38:27.126190    1620 out.go:193] 
	W0919 11:38:27.135191    1620 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19664-1099/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780 0x106e49780] Decompressors:map[bz2:0x140004e1a50 gz:0x140004e1a58 tar:0x140004e1a00 tar.bz2:0x140004e1a10 tar.gz:0x140004e1a20 tar.xz:0x140004e1a30 tar.zst:0x140004e1a40 tbz2:0x140004e1a10 tgz:0x140004e1a20 txz:0x140004e1a30 tzst:0x140004e1a40 xz:0x140004e1a60 zip:0x140004e1a70 zst:0x140004e1a68] Getters:map[file:0x140003f4810 http:0x1400079c3c0 https:0x1400079c410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0919 11:38:27.135220    1620 out_reason.go:110] 
	W0919 11:38:27.144272    1620 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 11:38:27.148228    1620 out.go:193] 
	
	
	* The control-plane node download-only-629000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-629000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
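
Note: the kubectl cache failure above is expected for this combination: upstream publishes no darwin/arm64 kubectl binary for v1.20.x (arm64 Mac client builds only appeared around v1.21), so the checksum fetch 404s and the download-only test tolerates the miss. A quick manual check of the missing artifact, assuming curl is available on the host:

  $ curl -s -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
  404
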
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-629000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (8.57s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (8.564922416s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.57s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0919 11:38:36.096554    1618 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 11:38:36.096625    1618 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
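
Note: preload-exists has nothing to download at this point; the json-events run above already filled the preload cache, so the check only needs to stat the tarball at the path logged. Inspecting the cache by hand (path taken from the log; the listing should show the v1.31.1 tarball next to the v1.20.0 one fetched earlier):

  $ ls /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/
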
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-556000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-556000: exit status 85 (77.085792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-629000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | -p download-only-629000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| delete  | -p download-only-629000        | download-only-629000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT | 19 Sep 24 11:38 PDT |
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.34.0 | 19 Sep 24 11:38 PDT |                     |
	|         | -p download-only-556000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 11:38:27
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 11:38:27.560311    1652 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:38:27.560460    1652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:27.560464    1652 out.go:358] Setting ErrFile to fd 2...
	I0919 11:38:27.560466    1652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:38:27.560592    1652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 11:38:27.561844    1652 out.go:352] Setting JSON to true
	I0919 11:38:27.579560    1652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":472,"bootTime":1726770635,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:38:27.579636    1652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:38:27.583677    1652 out.go:97] [download-only-556000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:38:27.583788    1652 notify.go:220] Checking for updates...
	I0919 11:38:27.587615    1652 out.go:169] MINIKUBE_LOCATION=19664
	I0919 11:38:27.590717    1652 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:38:27.594674    1652 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:38:27.597708    1652 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:38:27.600698    1652 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	W0919 11:38:27.606660    1652 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 11:38:27.606830    1652 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:38:27.609664    1652 out.go:97] Using the qemu2 driver based on user configuration
	I0919 11:38:27.609673    1652 start.go:297] selected driver: qemu2
	I0919 11:38:27.609676    1652 start.go:901] validating driver "qemu2" against <nil>
	I0919 11:38:27.609723    1652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 11:38:27.612547    1652 out.go:169] Automatically selected the socket_vmnet network
	I0919 11:38:27.617770    1652 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 11:38:27.617877    1652 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 11:38:27.617897    1652 cni.go:84] Creating CNI manager for ""
	I0919 11:38:27.617922    1652 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 11:38:27.617928    1652 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 11:38:27.617970    1652 start.go:340] cluster config:
	{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:38:27.621398    1652 iso.go:125] acquiring lock: {Name:mk32fbcde39346eed141639a1563e8d5b6be8aff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 11:38:27.624655    1652 out.go:97] Starting "download-only-556000" primary control-plane node in "download-only-556000" cluster
	I0919 11:38:27.624665    1652 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 11:38:27.687483    1652 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 11:38:27.687522    1652 cache.go:56] Caching tarball of preloaded images
	I0919 11:38:27.687709    1652 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 11:38:27.692782    1652 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0919 11:38:27.692791    1652 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0919 11:38:27.781739    1652 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19664-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-556000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-556000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
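
Note: as with the v1.20.0 profile, the non-zero exit from `minikube logs` is expected here: --download-only only fills caches and writes the profile config, never creating the VM, so there is no host to collect logs from and only the audit table and last-start log get printed. The test records the exit status and still passes. Turning the profile into a running cluster would use the command the output itself suggests:

  $ out/minikube-darwin-arm64 start -p download-only-556000
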
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-556000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-700000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-700000: exit status 85 (66.010417ms)

-- stdout --
	* Profile "addons-700000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-700000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-700000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-700000: exit status 85 (62.1475ms)

-- stdout --
	* Profile "addons-700000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-700000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (199.22s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-700000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-700000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m19.221805167s)
--- PASS: TestAddons/Setup (199.22s)

TestAddons/serial/Volcano (38.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 6.678166ms
addons_test.go:905: volcano-admission stabilized in 6.719666ms
addons_test.go:897: volcano-scheduler stabilized in 6.742083ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-kdg6r" [c94a43d1-faa9-4d37-8b30-a4c7e5ad0e1b] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00477575s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-9ndvl" [07c73b4a-82d5-4d17-9e07-6158dfd21c18] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.009704709s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-hb7tx" [c07cddfc-39db-49a1-bce4-55be423a4108] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004758375s
addons_test.go:932: (dbg) Run:  kubectl --context addons-700000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-700000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-700000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [267a9b38-4283-416b-ae5b-930aac152f38] Pending
helpers_test.go:344: "test-job-nginx-0" [267a9b38-4283-416b-ae5b-930aac152f38] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [267a9b38-4283-416b-ae5b-930aac152f38] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.009863833s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable volcano --alsologtostderr -v=1: (10.153132958s)
--- PASS: TestAddons/serial/Volcano (38.39s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-700000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-700000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (19.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-700000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-700000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-700000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9cc0be1c-32f0-4514-ac38-f55e21a21d78] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9cc0be1c-32f0-4514-ac38-f55e21a21d78] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.010411167s
I0919 11:52:38.401305    1618 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-700000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable ingress-dns --alsologtostderr -v=1: (1.113537958s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable ingress --alsologtostderr -v=1: (7.267786334s)
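
Note: the curl run through `minikube ssh` exercises host-header routing: the request hits the ingress controller on 127.0.0.1 inside the VM, and the Host: nginx.example.com header is what matches the rule created from testdata/nginx-ingress-v1.yaml and forwards to the nginx service; the nslookup against 192.168.105.2 then covers the ingress-dns addon. A minimal sketch of such a rule (illustrative values mirroring what the test exercises, not a copy of the testdata file):

  $ kubectl --context addons-700000 apply -f - <<'EOF'
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: nginx-ingress
  spec:
    rules:
    - host: nginx.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: nginx
              port:
                number: 80
  EOF
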
--- PASS: TestAddons/parallel/Ingress (19.01s)

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b7hcq" [69357b99-4c7a-4035-8f01-238311d7da0f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011921375s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-700000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-700000: (5.30234275s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (6.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.302542ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kwrlr" [ed9b97be-9528-4e60-ba30-1c4859ac01d1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011848167s
addons_test.go:417: (dbg) Run:  kubectl --context addons-700000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.29s)

TestAddons/parallel/CSI (59.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0919 11:50:35.815617    1618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.092625ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-700000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-700000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8438b747-402f-4178-bea2-dc63903e34b9] Pending
helpers_test.go:344: "task-pv-pod" [8438b747-402f-4178-bea2-dc63903e34b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8438b747-402f-4178-bea2-dc63903e34b9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005906458s
addons_test.go:590: (dbg) Run:  kubectl --context addons-700000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-700000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-700000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-700000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-700000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-700000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-700000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [307e5b62-ac10-4a50-8eb7-ebebe772a816] Pending
helpers_test.go:344: "task-pv-pod-restore" [307e5b62-ac10-4a50-8eb7-ebebe772a816] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [307e5b62-ac10-4a50-8eb7-ebebe772a816] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.010999s
addons_test.go:632: (dbg) Run:  kubectl --context addons-700000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-700000 delete pod task-pv-pod-restore: (1.245698583s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-700000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-700000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.134341209s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable volumesnapshots --alsologtostderr -v=1
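
Note: the sequence above is a complete snapshot/restore round trip: create a PVC against csi-hostpath-driver, run a pod on it, snapshot the volume, delete the pod and PVC, restore a new PVC from the snapshot, and verify a fresh pod can mount it. The restore half hinges on a PVC whose dataSource names the VolumeSnapshot; a minimal sketch of that object (object names taken from the log, storageClassName assumed, not a copy of testdata/csi-hostpath-driver/pvc-restore.yaml):

  $ kubectl --context addons-700000 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: hpvc-restore
  spec:
    storageClassName: csi-hostpath-sc
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    dataSource:
      name: new-snapshot-demo
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
  EOF
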
--- PASS: TestAddons/parallel/CSI (59.38s)

TestAddons/parallel/Headlamp (15.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-700000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-bpdmp" [ebf61592-654e-477c-97a9-532855953d20] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-bpdmp" [ebf61592-654e-477c-97a9-532855953d20] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.005487959s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable headlamp --alsologtostderr -v=1: (5.259238s)
--- PASS: TestAddons/parallel/Headlamp (15.64s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-rmrtd" [5654323c-b000-44b9-af1b-a02d99cf4eae] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005929458s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-700000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-700000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-700000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [95ccce05-b2d8-4d40-baf3-83b3307ce195] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [95ccce05-b2d8-4d40-baf3-83b3307ce195] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [95ccce05-b2d8-4d40-baf3-83b3307ce195] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004278667s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-700000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 ssh "cat /opt/local-path-provisioner/pvc-a192fc67-4276-49e5-9a57-481df923c89f_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-700000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-700000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.498741292s)
--- PASS: TestAddons/parallel/LocalPath (52.00s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-55z9t" [89442d59-5a01-4b79-b0f5-c99cea04d337] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005537917s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-700000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (10.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qbvgn" [16d19d45-47ec-4d58-b51b-8f83c981c5c1] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009519s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-700000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-700000 addons disable yakd --alsologtostderr -v=1: (5.287996708s)
--- PASS: TestAddons/parallel/Yakd (10.30s)

TestAddons/StoppedEnableDisable (9.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-700000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-700000: (9.228394125s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-700000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-700000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-700000
--- PASS: TestAddons/StoppedEnableDisable (9.42s)

TestHyperKitDriverInstallOrUpdate (11.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0919 12:17:34.968337    1618 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 12:17:34.968520    1618 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0919 12:17:36.905738    1618 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0919 12:17:36.905999    1618 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0919 12:17:36.906048    1618 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit
I0919 12:17:37.407069    1618 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40 0x10898ad40] Decompressors:map[bz2:0x14000483a30 gz:0x14000483a38 tar:0x140004839e0 tar.bz2:0x140004839f0 tar.gz:0x14000483a00 tar.xz:0x14000483a10 tar.zst:0x14000483a20 tbz2:0x140004839f0 tgz:0x14000483a00 txz:0x14000483a10 tzst:0x14000483a20 xz:0x14000483a40 zip:0x14000483a50 zst:0x14000483a48] Getters:map[file:0x14000065d10 http:0x140007dbbd0 https:0x140007dbc20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 12:17:37.407225    1618 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3040669560/001/docker-machine-driver-hyperkit
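
Note: this pass also absorbs a tolerated download failure: no arm64 build of docker-machine-driver-hyperkit exists at the v1.3.0 release (HyperKit is Intel-only), so the arch-specific fetch 404s and, per the "trying to get the common version" line, the installer falls back to the unsuffixed binary name. Confirming the missing arch-specific asset by hand (assuming curl, with -L to follow GitHub's release redirect):

  $ curl -sL -o /dev/null -w '%{http_code}\n' \
      https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256
  404
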
--- PASS: TestHyperKitDriverInstallOrUpdate (11.44s)

TestErrorSpam/setup (34.71s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-226000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-226000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 --driver=qemu2 : (34.709762875s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.71s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (55.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 stop: (3.19392925s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 stop: (26.057418083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-226000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-226000 stop: (26.061180583s)
--- PASS: TestErrorSpam/stop (55.32s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19664-1099/.minikube/files/etc/test/nested/copy/1618/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-569000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-569000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.659753s)
--- PASS: TestFunctional/serial/StartWithProxy (47.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.3s)

=== RUN   TestFunctional/serial/SoftStart
I0919 11:55:17.541624    1618 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-569000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-569000 --alsologtostderr -v=8: (38.297236625s)
functional_test.go:663: soft start took 38.297677125s for "functional-569000" cluster.
I0919 11:55:55.838082    1618 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.30s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-569000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2760946710/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cache add minikube-local-cache-test:functional-569000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cache delete minikube-local-cache-test:functional-569000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-569000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (69.704083ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)
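The cache_reload sequence above is a useful recovery pattern: delete an image inside the node, confirm crictl no longer sees it, then repopulate it from minikube's local cache. A rough reproduction of the same four commands with os/exec, under the assumption that the functional-569000 profile is running and the binary path matches this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs the minikube binary from the log and echoes its combined output.
	func mk(args ...string) error {
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		p := "functional-569000"
		// remove the cached image from the node's container runtime
		mk("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		// inspecti should now fail, as the non-zero exit in the log shows
		if mk("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("expected the image to be absent")
		}
		// reload pushes every image in minikube's cache back into the node
		mk("-p", p, "cache", "reload")
		// and the image is visible again
		if err := mk("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("expected the image to be restored:", err)
		}
	}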

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (1.85s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 kubectl -- --context functional-569000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-569000 kubectl -- --context functional-569000 get pods: (1.847672833s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.85s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-569000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-569000 get pods: (1.026907375s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

TestFunctional/serial/ExtraConfig (37.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-569000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-569000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.935471708s)
functional_test.go:761: restart took 37.935557084s for "functional-569000" cluster.
I0919 11:56:41.424766    1618 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.94s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-569000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd209306826/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (3.61s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-569000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-569000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-569000: exit status 115 (144.854292ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30203 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-569000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.61s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 config get cpus: exit status 14 (30.730625ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 config get cpus: exit status 14 (30.425459ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
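ConfigCmd leans on exit codes rather than output parsing: `config get` on an unset key exits 14 and prints "specified key could not be found in config". A sketch of how a caller can branch on that code, assuming the same binary and profile as above; treating 14 as "key not found" is inferred from this log, not from any documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-569000", "config", "get", "cpus")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("cpus = %s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
			// 14 is the "key not found" exit observed in this run
			fmt.Println("cpus is unset")
		default:
			fmt.Println("config get failed:", err)
		}
	}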

TestFunctional/parallel/DashboardCmd (12.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-569000 --alsologtostderr -v=1]
E0919 11:57:37.085647    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-569000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2897: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.62s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-569000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-569000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.354875ms)
-- stdout --
	* [functional-569000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0919 11:57:32.158114    2880 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:57:32.158257    2880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:57:32.158261    2880 out.go:358] Setting ErrFile to fd 2...
	I0919 11:57:32.158263    2880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:57:32.158386    2880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 11:57:32.159436    2880 out.go:352] Setting JSON to false
	I0919 11:57:32.177175    2880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1617,"bootTime":1726770635,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:57:32.177257    2880 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:57:32.181663    2880 out.go:177] * [functional-569000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0919 11:57:32.192891    2880 notify.go:220] Checking for updates...
	I0919 11:57:32.195865    2880 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 11:57:32.198793    2880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:57:32.202765    2880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:57:32.205804    2880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:57:32.208699    2880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 11:57:32.211785    2880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 11:57:32.215075    2880 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 11:57:32.215351    2880 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:57:32.218696    2880 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 11:57:32.225758    2880 start.go:297] selected driver: qemu2
	I0919 11:57:32.225765    2880 start.go:901] validating driver "qemu2" against &{Name:functional-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:57:32.225819    2880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 11:57:32.232718    2880 out.go:201] 
	W0919 11:57:32.236779    2880 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 11:57:32.240782    2880 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-569000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
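The dry run fails fast because minikube validates the memory request before touching the driver: 250MiB is below the 1800MB floor quoted in RSRC_INSUFFICIENT_REQ_MEMORY. A toy version of that guard follows; the constant and function are illustrative, not minikube's source.

	package main

	import "fmt"

	// minUsableMemoryMB mirrors the "usable minimum of 1800MB" in the log.
	const minUsableMemoryMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		// --memory 250MB, as passed in the dry run above
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}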

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-569000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-569000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.720708ms)
-- stdout --
	* [functional-569000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0919 11:57:32.388602    2891 out.go:345] Setting OutFile to fd 1 ...
	I0919 11:57:32.388709    2891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:57:32.388711    2891 out.go:358] Setting ErrFile to fd 2...
	I0919 11:57:32.388714    2891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 11:57:32.388845    2891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
	I0919 11:57:32.390215    2891 out.go:352] Setting JSON to false
	I0919 11:57:32.407903    2891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1617,"bootTime":1726770635,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0919 11:57:32.407999    2891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0919 11:57:32.412828    2891 out.go:177] * [functional-569000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0919 11:57:32.420619    2891 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 11:57:32.420660    2891 notify.go:220] Checking for updates...
	I0919 11:57:32.427794    2891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	I0919 11:57:32.429085    2891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 11:57:32.431751    2891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 11:57:32.434831    2891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	I0919 11:57:32.437814    2891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 11:57:32.441155    2891 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 11:57:32.441421    2891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 11:57:32.445799    2891 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0919 11:57:32.452773    2891 start.go:297] selected driver: qemu2
	I0919 11:57:32.452780    2891 start.go:901] validating driver "qemu2" against &{Name:functional-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 11:57:32.452839    2891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 11:57:32.459804    2891 out.go:201] 
	W0919 11:57:32.463770    2891 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 11:57:32.467755    2891 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fde2041c-9920-476a-8be2-e7a8dde3b10d] Running
E0919 11:57:01.237223    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005276416s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-569000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-569000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-569000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-569000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c2437eb6-6f44-4d20-bbae-c1479a6b2aa8] Pending
helpers_test.go:344: "sp-pod" [c2437eb6-6f44-4d20-bbae-c1479a6b2aa8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0919 11:57:06.361044    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [c2437eb6-6f44-4d20-bbae-c1479a6b2aa8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.01142075s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-569000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-569000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-569000 delete -f testdata/storage-provisioner/pod.yaml: (1.062865208s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-569000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [97625e37-c2e7-4af2-b751-1cd7b2197081] Pending
helpers_test.go:344: "sp-pod" [97625e37-c2e7-4af2-b751-1cd7b2197081] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [97625e37-c2e7-4af2-b751-1cd7b2197081] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009686875s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-569000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.58s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh -n functional-569000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cp functional-569000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1907921818/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh -n functional-569000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh -n functional-569000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1618/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /etc/test/nested/copy/1618/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1618.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /etc/ssl/certs/1618.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1618.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /usr/share/ca-certificates/1618.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /etc/ssl/certs/16182.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /usr/share/ca-certificates/16182.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-569000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
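The NodeLabels assertion hands kubectl a go-template that ranges over the first node's label map. The same template syntax can be exercised locally with Go's text/template; the node document below is a made-up fixture for illustration, not output from this run.

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// minimal stand-in for the JSON that `kubectl get nodes` returns
		nodes := map[string]any{
			"items": []any{
				map[string]any{
					"metadata": map[string]any{
						"labels": map[string]string{
							"kubernetes.io/arch":   "arm64",
							"minikube.k8s.io/name": "functional-569000",
						},
					},
				},
			},
		}
		// identical template to the one passed via --template above
		tmpl := template.Must(template.New("labels").Parse(
			"{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
		// prints the label keys: kubernetes.io/arch minikube.k8s.io/name
		tmpl.Execute(os.Stdout, nodes)
	}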

TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh "sudo systemctl is-active crio": exit status 1 (132.920834ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-569000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-569000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-569000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-569000 image ls --format short --alsologtostderr:
I0919 11:57:40.562739    2919 out.go:345] Setting OutFile to fd 1 ...
I0919 11:57:40.562905    2919 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:40.562908    2919 out.go:358] Setting ErrFile to fd 2...
I0919 11:57:40.562911    2919 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:40.563044    2919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 11:57:40.563502    2919 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:40.563563    2919 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:40.564373    2919 ssh_runner.go:195] Run: systemctl --version
I0919 11:57:40.564381    2919 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/functional-569000/id_rsa Username:docker}
I0919 11:57:40.591470    2919 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-569000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-569000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| localhost/my-image                          | functional-569000 | b188ce1768fd1 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-569000 | 666bf5ea6ac62 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-569000 image ls --format table --alsologtostderr:
I0919 11:57:42.650171    2931 out.go:345] Setting OutFile to fd 1 ...
I0919 11:57:42.650340    2931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:42.650343    2931 out.go:358] Setting ErrFile to fd 2...
I0919 11:57:42.650345    2931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:42.650475    2931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 11:57:42.650918    2931 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:42.650978    2931 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:42.651856    2931 ssh_runner.go:195] Run: systemctl --version
I0919 11:57:42.651865    2931 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/functional-569000/id_rsa Username:docker}
I0919 11:57:42.678083    2931 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/19 11:57:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-569000 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"666bf5ea6ac6250ed92977535ae5d1949ec01d662fa41ffb850c54691caa22d5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-569000"],"size":"30"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"b188ce1768fd1e7a935476315d66578466d5e6f94d7525bab55267cf07b3324b","repoDigests":[],"repoTags":["localhost/my-image:functional-569000"],"size":"1410000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-569000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-569000 image ls --format json --alsologtostderr:
I0919 11:57:42.579896    2929 out.go:345] Setting OutFile to fd 1 ...
I0919 11:57:42.580066    2929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:42.580069    2929 out.go:358] Setting ErrFile to fd 2...
I0919 11:57:42.580072    2929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:42.580193    2929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 11:57:42.580647    2929 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:42.580711    2929 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:42.581567    2929 ssh_runner.go:195] Run: systemctl --version
I0919 11:57:42.581582    2929 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/functional-569000/id_rsa Username:docker}
I0919 11:57:42.607949    2929 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
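The JSON format above is a single array of {id, repoDigests, repoTags, size} objects, which makes it the easiest format to post-process. A consumption sketch (assumes jq is installed; not part of the suite):

  $ out/minikube-darwin-arm64 -p functional-569000 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'
  # prints one "repo:tag<TAB>size-in-bytes" line per image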

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-569000 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-569000
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 666bf5ea6ac6250ed92977535ae5d1949ec01d662fa41ffb850c54691caa22d5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-569000
size: "30"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-569000 image ls --format yaml --alsologtostderr:
I0919 11:57:40.637304    2921 out.go:345] Setting OutFile to fd 1 ...
I0919 11:57:40.637445    2921 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:40.637448    2921 out.go:358] Setting ErrFile to fd 2...
I0919 11:57:40.637451    2921 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:40.637588    2921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 11:57:40.638013    2921 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:40.638077    2921 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:40.638891    2921 ssh_runner.go:195] Run: systemctl --version
I0919 11:57:40.638898    2921 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/functional-569000/id_rsa Username:docker}
I0919 11:57:40.665176    2921 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh pgrep buildkitd: exit status 1 (62.963667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image build -t localhost/my-image:functional-569000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-569000 image build -t localhost/my-image:functional-569000 testdata/build --alsologtostderr: (1.729885375s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-569000 image build -t localhost/my-image:functional-569000 testdata/build --alsologtostderr:
I0919 11:57:40.772285    2925 out.go:345] Setting OutFile to fd 1 ...
I0919 11:57:40.772519    2925 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:40.772522    2925 out.go:358] Setting ErrFile to fd 2...
I0919 11:57:40.772525    2925 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 11:57:40.772690    2925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19664-1099/.minikube/bin
I0919 11:57:40.773155    2925 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:40.773881    2925 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 11:57:40.774772    2925 ssh_runner.go:195] Run: systemctl --version
I0919 11:57:40.774780    2925 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19664-1099/.minikube/machines/functional-569000/id_rsa Username:docker}
I0919 11:57:40.801324    2925 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.377164252.tar
I0919 11:57:40.801398    2925 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 11:57:40.805210    2925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.377164252.tar
I0919 11:57:40.806779    2925 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.377164252.tar: stat -c "%s %y" /var/lib/minikube/build/build.377164252.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.377164252.tar': No such file or directory
I0919 11:57:40.806792    2925 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.377164252.tar --> /var/lib/minikube/build/build.377164252.tar (3072 bytes)
I0919 11:57:40.814876    2925 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.377164252
I0919 11:57:40.818321    2925 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.377164252 -xf /var/lib/minikube/build/build.377164252.tar
I0919 11:57:40.821443    2925 docker.go:360] Building image: /var/lib/minikube/build/build.377164252
I0919 11:57:40.821499    2925 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-569000 /var/lib/minikube/build/build.377164252
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b188ce1768fd1e7a935476315d66578466d5e6f94d7525bab55267cf07b3324b done
#8 naming to localhost/my-image:functional-569000 done
#8 DONE 0.0s
I0919 11:57:42.406524    2925 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-569000 /var/lib/minikube/build/build.377164252: (1.585053666s)
I0919 11:57:42.406627    2925 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.377164252
I0919 11:57:42.410651    2925 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.377164252.tar
I0919 11:57:42.413892    2925 build_images.go:217] Built localhost/my-image:functional-569000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.377164252.tar
I0919 11:57:42.413908    2925 build_images.go:133] succeeded building to: functional-569000
I0919 11:57:42.413912    2925 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)
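The stderr above spells out the build path: the local testdata/build context is tarred, copied into /var/lib/minikube/build in the guest, untarred, and built with the guest's own `docker build`. A hand-run equivalent of the build-and-verify step (tag and context path copied from the test run above):

  $ out/minikube-darwin-arm64 -p functional-569000 image build -t localhost/my-image:functional-569000 testdata/build
  $ out/minikube-darwin-arm64 -p functional-569000 image ls | grep localhost/my-image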

TestFunctional/parallel/ImageCommands/Setup (1.75s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.726381166s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-569000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/DockerEnv/bash (0.34s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-569000 docker-env) && out/minikube-darwin-arm64 status -p functional-569000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-569000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.34s)
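The docker-env test exercises the standard pattern for pointing the host's docker CLI at the daemon inside the VM for the current shell. A usage sketch (the --unset form is minikube's documented inverse; it is assumed here rather than taken from this log):

  $ eval $(out/minikube-darwin-arm64 -p functional-569000 docker-env)
  $ docker images    # now lists the images inside the minikube VM
  $ eval $(out/minikube-darwin-arm64 -p functional-569000 docker-env --unset)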

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-569000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-569000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-m6fbq" [55ef2b4f-9eb4-493a-b51d-3822a3ca586c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-m6fbq" [55ef2b4f-9eb4-493a-b51d-3822a3ca586c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0919 11:56:56.091434    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.099005    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.110547    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.133963    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.177378    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.260758    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.424138    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:56.747236    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:56:57.389476    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009748292s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
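The test polls pods labelled app=hello-node until they report healthy. A rough hand-run equivalent that uses kubectl's built-in readiness wait instead of the test's polling loop (a sketch, not the suite's own mechanism):

  $ kubectl --context functional-569000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  $ kubectl --context functional-569000 expose deployment hello-node --type=NodePort --port=8080
  $ kubectl --context functional-569000 wait --for=condition=available deployment/hello-node --timeout=600s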

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image load --daemon kicbase/echo-server:functional-569000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image load --daemon kicbase/echo-server:functional-569000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-569000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image load --daemon kicbase/echo-server:functional-569000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image save kicbase/echo-server:functional-569000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image rm kicbase/echo-server:functional-569000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-569000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 image save --daemon kicbase/echo-server:functional-569000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-569000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.35s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-569000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-569000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-569000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2731: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-569000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.35s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-569000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.29s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-569000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6b202623-289c-46f4-842f-d434f00ebd18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6b202623-289c-46f4-842f-d434f00ebd18] Running
E0919 11:56:58.673298    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00844875s
I0919 11:57:03.145423    1618 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.29s)

TestFunctional/parallel/ServiceCmd/List (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 service list -o json
functional_test.go:1494: Took "96.642625ms" to run "out/minikube-darwin-arm64 -p functional-569000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30735
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30735
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-569000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.215.22 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0919 11:57:03.227641    1618 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0919 11:57:03.265346    1618 config.go:182] Loaded profile config "functional-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-569000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "96.490417ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "35.509166ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "99.174291ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.451375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)
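For scripting against the JSON variant, a sketch (assumes jq is installed, and assumes minikube's usual grouping of profiles under top-level valid/invalid arrays; neither is shown in this log):

  $ out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'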

TestFunctional/parallel/MountCmd/any-port (5s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2215059764/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726772245621732000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2215059764/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726772245621732000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2215059764/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726772245621732000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2215059764/001/test-1726772245621732000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.533709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 11:57:25.686719    1618 retry.go:31] will retry after 291.181825ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 18:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 18:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 18:57 test-1726772245621732000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh cat /mount-9p/test-1726772245621732000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-569000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0dd69c6b-7567-4c72-bb5e-a1ea854310ad] Pending
helpers_test.go:344: "busybox-mount" [0dd69c6b-7567-4c72-bb5e-a1ea854310ad] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0dd69c6b-7567-4c72-bb5e-a1ea854310ad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0dd69c6b-7567-4c72-bb5e-a1ea854310ad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.0043465s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-569000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2215059764/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.00s)
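The any-port test mounts a host temp directory into the guest over 9p, checks it with findmnt from inside the guest, and tears it down. A hand-run equivalent (the host path is illustrative; --kill=true is the cleanup form used by VerifyCleanup below):

  $ out/minikube-darwin-arm64 mount -p functional-569000 /tmp/mount-demo:/mount-9p &
  $ out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-darwin-arm64 mount -p functional-569000 --kill=true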

TestFunctional/parallel/MountCmd/specific-port (0.79s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port178230314/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.623666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 11:57:30.691616    1618 retry.go:31] will retry after 288.530627ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port178230314/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh "sudo umount -f /mount-9p": exit status 1 (64.432375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-569000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port178230314/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.79s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.72s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T" /mount1: exit status 1 (84.0725ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 11:57:31.501538    1618 retry.go:31] will retry after 375.564498ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-569000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-569000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-569000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2428917011/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.72s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-569000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-569000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-569000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (181.68s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-056000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0919 11:58:18.046870    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
E0919 11:59:39.968213    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/addons-700000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-056000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m1.498504333s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.68s)

TestMultiControlPlane/serial/DeployApp (4.55s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-056000 -- rollout status deployment/busybox: (2.906045792s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-2xzs5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-sfmp7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-vfwhz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-2xzs5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-sfmp7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-vfwhz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-2xzs5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-sfmp7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-vfwhz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.55s)

TestMultiControlPlane/serial/PingHostFromPods (0.74s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-2xzs5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-2xzs5 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-sfmp7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-sfmp7 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-vfwhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-vfwhz -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.74s)
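
The shell pipeline above recovers the host gateway address from busybox nslookup output: awk 'NR==5' keeps the fifth line, cut -d' ' -f3 keeps its third space-separated field, and the result is what each pod pings. A small Go sketch of just that extraction, assuming busybox's output layout (the sample text in main is illustrative, not captured from this run):

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: take the
	// fifth line of the output and return its third field. Deliberately as
	// fragile as the pipeline itself: it assumes busybox nslookup's layout.
	func hostIP(nslookupOut string) (string, error) {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output: %q", nslookupOut)
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return "", fmt.Errorf("unexpected fifth line: %q", lines[4])
		}
		return fields[2], nil
	}

	func main() {
		sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.105.1"
		fmt.Println(hostIP(sample)) // 192.168.105.1 <nil>
	}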

TestMultiControlPlane/serial/AddWorkerNode (52.59s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-056000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-056000 -v=7 --alsologtostderr: (52.377161042s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.59s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-056000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

TestMultiControlPlane/serial/CopyFile (4.15s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2840061890/001/cp-test_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt ha-056000-m02:/home/docker/cp-test_ha-056000_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test_ha-056000_ha-056000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt ha-056000-m03:/home/docker/cp-test_ha-056000_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test_ha-056000_ha-056000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt ha-056000-m04:/home/docker/cp-test_ha-056000_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test_ha-056000_ha-056000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2840061890/001/cp-test_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt ha-056000:/home/docker/cp-test_ha-056000-m02_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test_ha-056000-m02_ha-056000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt ha-056000-m03:/home/docker/cp-test_ha-056000-m02_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test_ha-056000-m02_ha-056000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt ha-056000-m04:/home/docker/cp-test_ha-056000-m02_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test_ha-056000-m02_ha-056000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2840061890/001/cp-test_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt ha-056000:/home/docker/cp-test_ha-056000-m03_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
E0919 12:01:47.827981    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:01:47.835679    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:01:47.849073    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
E0919 12:01:47.871808    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test_ha-056000-m03_ha-056000.txt"
E0919 12:01:47.915016    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt ha-056000-m02:/home/docker/cp-test_ha-056000-m03_ha-056000-m02.txt
E0919 12:01:47.998603    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test_ha-056000-m03_ha-056000-m02.txt"
E0919 12:01:48.161950    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt ha-056000-m04:/home/docker/cp-test_ha-056000-m03_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test_ha-056000-m03_ha-056000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000-m04:/home/docker/cp-test.txt
E0919 12:01:48.485424    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2840061890/001/cp-test_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt ha-056000:/home/docker/cp-test_ha-056000-m04_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test_ha-056000-m04_ha-056000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt ha-056000-m02:/home/docker/cp-test_ha-056000-m04_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test_ha-056000-m04_ha-056000-m02.txt"
E0919 12:01:49.127443    1618 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19664-1099/.minikube/profiles/functional-569000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt ha-056000-m03:/home/docker/cp-test_ha-056000-m04_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test_ha-056000-m04_ha-056000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.15s)
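
Every step in this pass follows one shape: `minikube cp` a file between host and node (or node to node), then `ssh -n <node> sudo cat` it back to confirm the copy landed. A condensed sketch of that cp-then-cat pattern, shelling out the way the helpers do; the profile and node names are the ones from this log, but the function itself is illustrative, not helpers_test.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cpAndVerify copies src into node:dst with `minikube cp`, then reads the
	// file back over `minikube ssh` so a silent copy failure cannot pass.
	func cpAndVerify(profile, src, node, dst string) error {
		if out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"cp", src, node+":"+dst).CombinedOutput(); err != nil {
			return fmt.Errorf("cp failed: %v: %s", err, out)
		}
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
		if err != nil {
			return fmt.Errorf("verify failed: %v: %s", err, out)
		}
		fmt.Printf("copied %s -> %s:%s (%d bytes read back)\n", src, node, dst, len(out))
		return nil
	}

	func main() {
		_ = cpAndVerify("ha-056000", "testdata/cp-test.txt",
			"ha-056000-m02", "/home/docker/cp-test.txt")
	}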

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.23s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (3.22949725s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
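
The DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the currentstep field in minikube's JSON events (visible later in this log as "currentstep":"0" with "totalsteps":"19"): no step number may repeat, and the sequence must move forward. A minimal sketch of the ordering check alone, assuming the step values were already decoded into ints; the real assertions live in json_output_test.go:

	package main

	import "fmt"

	// checkSteps rejects any step number that goes backwards; a sketch of the
	// property the subtest name describes, not the test's implementation.
	func checkSteps(steps []int) error {
		for i := 1; i < len(steps); i++ {
			if steps[i] < steps[i-1] {
				return fmt.Errorf("step %d appeared after step %d", steps[i], steps[i-1])
			}
		}
		return nil
	}

	func main() {
		fmt.Println(checkSteps([]int{0, 1, 2, 5})) // <nil>
		fmt.Println(checkSteps([]int{0, 2, 1}))    // error
	}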

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-885000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-885000 --output=json --user=testUser: (2.098041334s)
--- PASS: TestJSONOutput/stop/Command (2.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-202000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-202000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.932292ms)

-- stdout --
	{"specversion":"1.0","id":"2b3217ff-50d6-4302-88e7-84105fcad522","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-202000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4990b8cc-265d-4cdb-84fd-9d587a3b627d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"69ac7e4f-6e1e-479a-b228-f2b503c25db4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig"}}
	{"specversion":"1.0","id":"4ac15f67-c6a8-4e04-bcbc-70b1db076bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"93ec0b81-d093-43a6-83a4-f468e1227c0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb8ddf52-fb9c-4a55-bbcf-888022122e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube"}}
	{"specversion":"1.0","id":"828d2539-c925-47d5-89f6-0cea399dda0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a0871a1-866d-4a41-b45c-c39ebcddba59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-202000
--- PASS: TestErrorJSONOutput (0.20s)
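
The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style object per line, where type distinguishes step/info/error events and data carries the payload (here the final io.k8s.sigs.minikube.error event reports exitcode 56 and the unsupported-driver message). A minimal Go sketch for consuming such a stream, using only the field names visible above; minikube's real schema has more fields than this:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors just the fields visible in the log above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate non-JSON lines
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}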

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.15s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.15s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.750583ms)

-- stdout --
	* [NoKubernetes-562000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19664-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19664-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
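
This subtest passes because the failure is the point: --no-kubernetes and --kubernetes-version are mutually exclusive, and the MK_USAGE message directs users to `minikube config unset kubernetes-version` when the version is pinned globally. A hedged sketch of that kind of mutual-exclusion guard; the function and its wiring are hypothetical, and minikube's actual check lives inside its start command:

	package main

	import (
		"errors"
		"fmt"
	)

	// validateFlags rejects the combination exercised above: a cluster without
	// Kubernetes that nonetheless pins a Kubernetes version.
	func validateFlags(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return errors.New("cannot specify --kubernetes-version with --no-kubernetes; run `minikube config unset kubernetes-version` to clear a global default")
		}
		return nil
	}

	func main() {
		fmt.Println(validateFlags(true, "1.20")) // the exit status 14 case above
		fmt.Println(validateFlags(true, ""))     // <nil>: valid combination
	}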

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.575458ms)

-- stdout --
	* The control-plane node NoKubernetes-562000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-562000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.757409417s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.666315209s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.42s)

TestNoKubernetes/serial/Stop (3.52s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-562000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-562000: (3.52371075s)
--- PASS: TestNoKubernetes/serial/Stop (3.52s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.498666ms)

-- stdout --
	* The control-plane node NoKubernetes-562000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-562000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-269000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

TestStartStop/group/old-k8s-version/serial/Stop (3.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-029000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-029000 --alsologtostderr -v=3: (3.346304667s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-029000 -n old-k8s-version-029000: exit status 7 (51.24ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-029000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
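
The "status error: exit status 7 (may be ok)" note above reflects that minikube status encodes host state in its exit code, so a stopped cluster exits non-zero even though the test can still proceed to enable the addon. A small sketch of shelling out with that tolerance; treating exit code 7 as "stopped host" here is taken purely from this log's handling:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// hostState runs `minikube status --format={{.Host}}` and returns what it
	// printed. Exit status 7 is expected for a stopped cluster, so it is not
	// treated as a failure, matching the "(may be ok)" note in the log.
	func hostState(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		var ee *exec.ExitError
		if err != nil && !(errors.As(err, &ee) && ee.ExitCode() == 7) {
			return "", err
		}
		return string(out), nil
	}

	func main() {
		fmt.Println(hostState("old-k8s-version-029000"))
	}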

TestStartStop/group/no-preload/serial/Stop (3.45s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-816000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-816000 --alsologtostderr -v=3: (3.454296333s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-816000 -n no-preload-816000: exit status 7 (49.885208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-816000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-850000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-850000 --alsologtostderr -v=3: (3.672534709s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.67s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-520000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-520000 --alsologtostderr -v=3: (3.396703291s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-850000 -n embed-certs-850000: exit status 7 (50.85425ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-850000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-520000 -n default-k8s-diff-port-520000: exit status 7 (53.910666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-520000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-839000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-839000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-839000 --alsologtostderr -v=3: (2.079947083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.08s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-839000 -n newest-cni-839000: exit status 7 (65.122708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-839000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-342000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-342000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-342000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/hosts:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/resolv.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-342000

>>> host: crictl pods:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crictl containers:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: describe netcat deployment:
error: context "cilium-342000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-342000" does not exist

>>> k8s: netcat logs:
error: context "cilium-342000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-342000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-342000" does not exist

>>> k8s: coredns logs:
error: context "cilium-342000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-342000" does not exist

>>> k8s: api server logs:
error: context "cilium-342000" does not exist

>>> host: /etc/cni:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: ip a s:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: ip r s:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: iptables-save:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: iptables table nat:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-342000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-342000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-342000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-342000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-342000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-342000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-342000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: kubelet daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: kubelet logs:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-342000

>>> host: docker daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: docker daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: docker system info:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: cri-docker daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: cri-docker daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: cri-dockerd version:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: containerd daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: containerd daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: containerd config dump:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crio daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crio daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/crio:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crio config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

----------------------- debugLogs end: cilium-342000 [took: 2.200492709s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-342000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-342000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-461000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-461000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)